
New Computers Act Like A Brain Made of Light

Researchers from UCLA have discovered that machine learning algorithms can be etched onto solid plates that diffract light and process information with little to no power. Each plate behaves like a different layer of the brain, redirecting beams of light much as a network of neurons passes electrical signals back and forth.

Machine learning builds a sequence of layered formulas by finding patterns in a set of data. The initial outline is provided by human beings in the form of labeled examples. Then, using those examples as a reference point, the computer can sort through data and begin to learn further associations automatically.
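
As a rough illustration of that training loop (a toy sketch in Python, not the UCLA or NIST systems), a handful of labeled examples is enough for a simple model to learn a rule and then apply it to data it has never seen:

```python
# A minimal sketch of supervised learning: humans supply labeled examples,
# the algorithm adjusts its internal numbers until its guesses match the
# labels, then applies the learned rule to new data.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: 2 measurements per example, label = 1 if their sum > 1
X = rng.random((200, 2))
y = (X.sum(axis=1) > 1.0).astype(float)

w = np.zeros(2)   # weights the machine will learn
b = 0.0
lr = 0.1

for _ in range(500):                       # repeated passes over the examples
    pred = 1 / (1 + np.exp(-(X @ w + b)))  # current guess (logistic)
    err = pred - y                         # how wrong each guess is
    w -= lr * X.T @ err / len(X)           # nudge weights toward fewer mistakes
    b -= lr * err.mean()

new_point = np.array([0.9, 0.4])
print("learned weights:", w)
print("prediction for new point:", 1 / (1 + np.exp(-(new_point @ w + b))))
```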

For example, in the case of facial recognition, computer scientists will use proportional reference points defined by the golden ratio, a proportion commonly found in the natural world. Then, using this “skeleton” as a scaffold for processing, machines can sort through many different faces, getting better at identifying them as more data is fed to the system.


These “layers” of machine learning are composed of separate steps of comparison and conclusion. Once they are fully trained, i.e. consolidated into fixed mathematical formulas, the machine essentially performs the same calculations over and over. Normally those consolidated layers would run on a type of neuromorphic chip: since machine learning takes a lot of processing power, computer scientists are starting to use advanced microchips that model information as a densely interconnected web of data, similar to the human brain.
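
To see why a trained network boils down to repeated calculation, here is a minimal sketch with made-up, already-“trained” weights (not any particular chip's design): inference is just the same multiply-and-squash step applied layer after layer.

```python
# A minimal sketch: once training is finished, each "layer" is a fixed matrix
# of numbers, and running the network is the same calculation repeated.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical frozen weights for a small 3-layer network (4 -> 8 -> 8 -> 3)
layers = [rng.standard_normal((4, 8)),
          rng.standard_normal((8, 8)),
          rng.standard_normal((8, 3))]

def forward(x):
    for W in layers:                 # the same step at every layer
        x = np.maximum(x @ W, 0.0)   # weighted sum, then a simple nonlinearity
    return x

print(forward(rng.standard_normal(4)))
```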

However, researchers from UCLA have discovered that layers of machine learning can be printed on solid plates that redirect beams of light to process information instead.


Similar to the black-and-white squares your iPhone can use to decode data, this transparent material is etched with complex patterns that represent the steps of a mathematical sequence. Based on both the thickness and shape of each bump on the plate, a beam of light can be “aimed” to strike a particular location on the next plate.

Each coordinate on the plate can either transmit or reflect light, resembling a neuron that connects to other coordinates (neurons) by sending a beam of light toward a specific location on the next plate. Scientists can tune each of these “neurons” by altering the phase and amplitude of the light traveling through it.
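
A hugely simplified sketch of that picture, with invented numbers rather than the real optics, treats each pixel on a plate as a complex transmission coefficient (an amplitude plus a phase) and stands in for the free-space hop to the next plate with a generic mixing matrix:

```python
# A minimal, hypothetical model of one diffractive plate: each pixel ("neuron")
# multiplies the incoming light by its own amplitude and phase, and a fixed
# propagation step mixes the result onto the next plate.
import numpy as np

rng = np.random.default_rng(2)
n = 16                                   # pixels per plate (hypothetical size)

incoming = np.ones(n, dtype=complex)     # a flat beam hitting the plate

amplitude = rng.uniform(0.5, 1.0, n)     # how strongly each pixel transmits
phase = rng.uniform(0, 2 * np.pi, n)     # delay set by each pixel's thickness
transmission = amplitude * np.exp(1j * phase)

# Stand-in for free-space propagation between plates: every pixel on this
# plate contributes a little light to every pixel on the next one.
propagation = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

field_after_plate = transmission * incoming       # the plate "fires"
field_at_next_plate = propagation @ field_after_plate

intensity = np.abs(field_at_next_plate) ** 2      # what a detector would see
print(intensity.round(2))
```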

“Our all-optical deep learning framework can perform, at the speed of light, various complex functions that computer-based neural networks can implement,” the UCLA researchers wrote.

For example, two coordinates on the same layer could represent values in a formula. By redirecting light from those coordinates to a single point on the next layer, you could add them together. In reality the calculations in question are more complex, but this gives you an idea of how certain mathematical formulas can map onto diffraction patterns.
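
In the simplest possible sketch of that intuition (toy numbers, not a real device), the “addition” is just two light fields landing on the same point and summing there:

```python
# Two "neurons" on one layer send their light to the same spot on the next
# layer; the fields simply add at that point.
import numpy as np

a = 0.3 * np.exp(1j * 0.0)   # value carried by the first coordinate
b = 0.5 * np.exp(1j * 0.0)   # value carried by the second coordinate

combined = a + b             # both beams land on the same point and sum
print(abs(combined))         # 0.8 -- the addition performed by the light itself
```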

In testing, the system could tell whether a given beam of light represented a 1, a 2, or a 3 with more than 90 percent accuracy.

This could reduce the power required for image recognition to nearly zero. Presently, machine learning requires a lot of processing power, far more than analog computing, so the promise of cutting energy requirements so radically should be enough to justify further research and development.

It turns out the idea of using light signals to create a neuromorphic computer may be more common than researchers think.

In fact, the National Institute of Standards and Technology (NIST) has come up with a similar design composed of silicon chips that transmit signals of light across a miniature brain-like grid.

Their justification is that using light instead of electricity eliminates the interference caused by heavier electrons moving through a conductor, so the signals in question travel faster and farther than their electric counterparts.

“Light’s advantages could improve the performance of neural nets for scientific data analysis such as searches for Earth-like planets and quantum information science, and accelerate the development of highly intuitive control systems for autonomous vehicles,” NIST physicist Jeff Chiles said.

“Because photons are uncharged and massless, photons avoid charge-based wiring parasitics”

“A suitable solution is the use of multiple planes of photonic waveguides, a field which has seen significant progress over the last decade.”

Waveguides are structures, typically made of glass or other transparent materials, that guide electromagnetic waves in a particular direction. The most common waveguide is the silicon dioxide optical fiber used to transmit communication signals across the ocean.


Similar to diffraction plates, stacking multiple waveguide layers on top of each other creates a three-dimensional design capable of the complex routing schemes necessary for replicating a neural system. Additional layers can be added to further increase informational complexity.

How the UCLA device "sees" a sandal. Image via UCLA Samueli / Ozcan Research Group

In this experimental example, two stacked waveguide layers create a three-dimensional grid with a complete interlocking exchange between 10 “upstream” inputs (neurons) and 10 “downstream” outputs, for a total of 100 possible connections (10 × 10).

The two waveguides are made of silicon nitride sandwiched inside an ordinary silicon wafer. Each is 800 nanometers wide and 400 nanometers thick.

In the experiment, laser light was fed to the chip through an external optical fiber to see whether light could be successfully transmitted from each input to every output.

By varying the power routed to particular locations, the scientists demonstrated two possible ways of prioritizing information transfer: uniformly (each output receives the same power) or in a “bell curve” distribution (neurons in the middle receive most of the power, while peripheral neurons receive less).
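
Here is a minimal sketch of that idea, with made-up numbers rather than NIST's measurements: the same total power is spread across ten outputs either uniformly or in a bell-curve pattern centered on the middle outputs.

```python
# Illustrative power-routing profiles across 10 outputs (hypothetical values).
import numpy as np

outputs = np.arange(10)

uniform = np.full(10, 1.0 / 10)                   # every output gets 10%

center, width = 4.5, 2.0                          # bell curve favoring the middle
bell = np.exp(-((outputs - center) ** 2) / (2 * width ** 2))
bell /= bell.sum()                                # normalize to the same total power

for i in outputs:
    print(f"output {i}: uniform {uniform[i]:.2f}  bell {bell[i]:.2f}")
```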

“We’ve really done two things here,” Chiles said. “We’ve begun to use the third dimension to enable more optical connectivity, and we’ve developed a new measurement technique to rapidly characterize many devices in a photonic system. Both advances are crucial as we begin to scale up to massive optoelectronic neural systems.”

 

Design, fabrication, and metrology of 10 × 100 multi-planar integrated photonic routing manifolds for neural networks

NIST Chip Lights Up Optical Neural Network Demo

This 3D-printed AI construct analyzes by bending light

 

https://www.nist.gov/news-events/news/2018/07/nist-chip-lights-optical-neural-network-demo

UCLA 3D prints an artificial “brain” that could breed new autonomous vehicles

