We demonstrate a device-level neural network that uses a simplified architecture and thin-film transistors (TFTs). First, we form a neuron unit from eight transistors and reduce the synapse unit to a single transistor by exploiting the characteristic variations of the synapse transistors to adjust connection strength. Second, we compose a “local interconnective neural network” well suited to integrated circuits, in which each neuron connects to its four neighboring neurons through pairs of synapses: a “cooperatory synapse” and an “oppository synapse.” Third, we fabricate the neural network using thin-film technology, which is expected to be widely used for giant microelectronics. Although the device architecture differs greatly from conventional systems, the neural network is confirmed to learn, without supervision, logic functions such as OR and XOR; XOR is not linearly separable and is a standard benchmark for testing neural-network performance. With this simplified architecture and TFTs, a large-scale neural network comparable in scale with the human brain may be integrated.
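The abstract's claim that XOR, unlike OR, is not linearly separable can be verified directly: no single threshold unit (the kind a single-layer perceptron implements) reproduces the XOR truth table. The sketch below is illustrative only, not part of the reported device; it brute-forces a coarse grid of weights and biases for a two-input threshold unit.

```python
import itertools

def linearly_separable(truth_table):
    """Search a coarse grid of (w1, w2, bias) values for a single
    threshold unit that reproduces the given 2-input truth table.
    A coarse grid suffices for Boolean functions of two inputs."""
    grid = [x / 2 for x in range(-8, 9)]  # weights in [-4, 4], step 0.5
    for w1, w2, b in itertools.product(grid, repeat=3):
        if all((w1 * a + w2 * c + b > 0) == bool(y)
               for (a, c), y in truth_table.items()):
            return True
    return False

OR  = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 1}
XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

print(linearly_separable(OR))   # OR admits a separating threshold
print(linearly_separable(XOR))  # XOR does not
```

Because XOR defeats any single linear threshold, learning it requires interaction between multiple units, which is why it is the customary benchmark cited in the abstract.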