An Artificial Neural Network at Device Level Using Simplified Architecture and Thin-Film Transistors

7 Author(s)
Kasakawa, T. (Dept. of Electron. & Inf., Ryukoku Univ., Otsu, Japan); Tabata, H.; Onodera, R.; Kojima, H.; et al.

We present a neural network at the device level that uses a simplified architecture and thin-film transistors (TFTs). First, we form a neuron unit from eight transistors and reduce the synapse unit to a single transistor by exploiting the characteristic variations of the synapse transistors to set the connection strength. Second, we compose a "local interconnective neural network" that is well suited to integrated circuits, in which each neuron connects to its four neighboring neurons through pairs of synapses: a "cooperatory synapse" and an "oppository synapse." Third, we fabricate the neural network using thin-film technology, which is expected to be widely used for giant microelectronics. Although the device architecture differs markedly from conventional systems, the neural network is confirmed to learn, without supervision, logic functions such as OR and XOR; XOR is not linearly separable and is a standard benchmark for neural network performance. Using this simplified architecture and TFTs, a large-scale neural network comparable with the human brain may be integrated.
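The "local interconnective" topology described in the abstract can be sketched in software: each neuron on a grid links to its four nearest neighbors, and each link carries a paired positive (cooperatory) and negative (oppository) synapse weight. This is a minimal illustrative sketch only; the square-grid layout, the function names, and the unit weight magnitudes are assumptions, not details taken from the paper, where connection strengths come from per-transistor characteristic variations.

```python
def neighbors(r, c, rows, cols):
    """Yield the four nearest neighbors of grid cell (r, c) inside the grid."""
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < rows and 0 <= nc < cols:
            yield nr, nc

def build_synapse_pairs(rows, cols):
    """Map each neuron to its neighbors, each link carrying a (+w, -w) pair.

    The unit magnitudes stand in for the connection strengths that, in the
    actual device, are set by synapse-transistor characteristic variations.
    """
    net = {}
    for r in range(rows):
        for c in range(cols):
            net[(r, c)] = {
                (nr, nc): {"cooperatory": +1.0, "oppository": -1.0}
                for nr, nc in neighbors(r, c, rows, cols)
            }
    return net

net = build_synapse_pairs(3, 3)
print(len(net[(1, 1)]))  # interior neuron: 4 neighbors
print(len(net[(0, 0)]))  # corner neuron: 2 neighbors
```

Note how the connectivity stays constant per neuron regardless of network size, which is what makes this topology attractive for integrated circuits: wiring grows linearly with the number of neurons rather than quadratically as in a fully connected network.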

Published in:

IEEE Transactions on Electron Devices (Volume 57, Issue 10)