Stay up to date
Subscribe to our newsletter to receive updates on new answer keys, study tips, and information that will make a difference in your preparation!

Text 3
A new "eye" may radically change how robots see
The low-power robotics system LENS merges a brainlike sensor, a chip and an AI model
By Kathryn Hulick
This hexapod robot recognizes its surroundings using a vision system that occupies less storage space than a single photo on your phone. Running the new system uses only 10 percent of the energy required by conventional location systems, researchers report in the June Science Robotics.
Such a low-power "eye" could be extremely useful for robots involved in space and undersea exploration, as well as for drones or microrobots, such as those that examine the digestive tract, says roboticist Yulia Sandamirskaya of Zurich University of Applied Sciences, who was not involved in the study.
The system, known as LENS, consists of a sensor, a chip and a super-tiny AI model to learn and remember location. Key to the system is the chip and sensor combo, called Speck, a commercially available product from the company SynSense. Speck's visual sensor operates "more like the human eye" and is more efficient than a camera, says study coauthor Adam Hines, a bioroboticist at Queensland University of Technology in Brisbane, Australia.
Cameras capture everything in their visual field many times per second, even if nothing changes. Mainstream AI models excel at turning this huge pile of data into useful information. But the combo of camera and AI guzzles power. Determining location devours up to a third of a mobile robot's battery. "It is, frankly, insane that we got used to using cameras for robots," Sandamirskaya says.
In contrast, the human eye detects primarily changes as we move through an environment. The brain then updates the image of what we're seeing based on those changes. Similarly, each pixel of Speck's eyelike sensor "only wakes up when it detects a change in brightness in the environment," Hines says, so it tends to capture important structures, like edges. The information from the sensor feeds into a computer processor with digital components that act like spiking neurons in the brain, activating only as information arrives - a type of neuromorphic computing.
The sensor and chip work together with an AI model to process environmental data. The AI model developed by Hines' team is fundamentally different from popular ones used for chatbots and the like. It learns to recognize places not from a huge pile of visual data but by analyzing edges and other key visual information coming from the sensor. This combo of a neuromorphic sensor, processor and AI model gives LENS its low-power superpower. "Radically new, power-efficient solutions for... place recognition are needed, like LENS," Sandamirskaya says.
Adapted from: Science News. <https://www.sciencenews.org/article/robot-eye-artificial-intelligence-ai> [Accessed on 14th July 2025].
Which of the following scenarios would least benefit from the use of a neuromorphic vision system like LENS?
Robotic deep-sea research.
Planetary rovers with limited power sources.
Indoor robots with unrestricted access to power and bandwidth.
Swarm microrobots for internal medical diagnostics.
Military drones in operation.
The scenario that would least benefit from the use of a neuromorphic vision system like LENS is that of indoor robots with unrestricted access to power and bandwidth, a situation that is not addressed in the text.