
June 28, 2025

How Watson College is helping to lead the robotics revolution

Thanks to advances in technology and artificial intelligence, we are closer than ever to having robots in our daily lives

PhD students Zainab Altaweel, left, and Yohei Hayamizu work in Associate Professor Shiqi Zhang's lab at Watson College's School of Computing, where they are learning how to control two new humanoid robots. Image Credit: Jonathan Cohen.

Ever since The Jetsons and similar futuristic visions, we've imagined working side by side with robots to help with everyday tasks.

Thanks to advances in technology and artificial intelligence, that day is closer than ever, and Watson College researchers are at the forefront of those innovations. Here are just a few examples of what they're working on.

'Cobots' for manufacturing

As the manufacturing sector upgrades to Industry 4.0, which draws on advances such as artificial intelligence, smart systems, virtualization, machine learning and the internet of things, Associate Professor Christopher Greene, MS '98, PhD '01, of the School of Systems Science and Industrial Engineering researches collaborative robotics, or "cobots," as part of his larger goal of continual process improvement.

"In layman's terms," he says, "it's about trying to make everybody's life easier."

Most automated robots on assembly lines are programmed to perform just a few repetitive tasks and carry no sensors designed for working side by side with humans. Some functions require pressure pads or light curtains for limited interactivity, but those safeguards are added separately.

Through the Watson Institute for Systems Excellence (WISE), Greene has led projects for factories that make electronic modules using surface-mount technology, and he has done research for automated pharmacies that sort and ship medications for patients who fill their prescriptions by mail. He also works on cloud robotics, which allows users to control robots from anywhere in the world.

Human workers are prone to human errors, but robots can perform tasks thousands of times in exactly the same way, such as gluing a piece onto a product with the precise amount of pressure required to make it stick firmly without breaking it. They also can be more accurate when it matters most. Humans, however, are still required to program and maintain the automated equipment.

"Assembling pill vials with the right quantities is done in an automated factory," Greene says. "Cobots are separating the pills, they're putting them in bottles, they're attaching labels and putting the caps on them. They're putting it into whatever packaging there is, and it's going straight to the mail. All these steps have to be correct, or people die. A human being can get distracted, pick up the wrong pill vial or put it in the wrong package. If you correctly program a cobot to pick up that pill bottle, scan it and put it in a package, that cobot will never make a mistake."

The rise in robots' abilities and usefulness, he adds, will lead to shifts in the labor force.

"Everybody's always asking, 'Is it going to put people out of a job?' I tell my students, 'Not if you learn how to be the one who programs or maintains the cobot,'" he says. "The cobot is going to break down because, over time, that's just what happens to machinery, and it can't fix itself."

Helping humans and robots work together

If humans and robots are going to get along well, they need a common language, or they must at least share common ground about problem-solving.

Shiqi Zhang, an associate professor at the School of Computing, studies the intersection of AI and robotics, and he especially wants to ensure that service robots work smoothly alongside humans in collaborative environments.

There's just one problem, and it's a big one: "Robots and humans don't work well with each other right now," he says. "They don't trust each other. Humans don't know what robots can do, and robots have no idea about the role of humans."

In his research group, Zhang and his students focus on everyday scenarios, such as homes, hospitals, airports and shopping centers, with three primary themes: robot decision-making, human/robot interaction and robot task-motion planning. Zhang uses language and graphics to show how the AI makes decisions and why humans should trust those decisions.

"AI's robot system is not transparent," he says. "When the robot is trying to do something, humans have no idea how it makes the decision. Sometimes humans are too optimistic about robots, and sometimes it's the other way round, so one way or the other, it's not a good ecosystem for a human/robot team."

One question for software and hardware designers improving AI/human collaboration is how much information needs to be shared to optimize productivity. There should be enough that humans can make informed decisions, but not so much that they are overwhelmed with unnecessary information.

Zhang is experimenting with augmented reality (AR), which allows users to perceive the real world overlaid with computer-generated information. Unlike virtual reality (VR), which is an entirely computer-generated experience, AR lets someone on a factory floor stacked with boxes and crates pull out a tablet or put on a pair of AR-enhanced glasses to see where the robots are, so accidents can be avoided.

"Because these robots are closely working with people, safety becomes a huge issue," Zhang says. "How do we make sure the robot is close enough to provide services but keeping its distance to follow social norms and be safe? There is no standard way to enable this kind of communication. Humans talk to each other in natural language, and we use gestures and nonverbal cues, but how do we get robots to understand?"
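Zhang's question about a robot that stays close enough to provide services while keeping a safe, socially acceptable distance can be framed as a simple proxemics check. The sketch below is a minimal illustration of that idea, not code from Zhang's lab: the distance thresholds, the Pose structure and the zone names are hypothetical stand-ins for whatever a real service robot would use.

```python
import math
from dataclasses import dataclass

# Hypothetical proxemics bands in meters; a real system would tune these
# per robot, task and social context.
TOO_CLOSE = 0.5        # inside this radius, back off
SERVICE_RANGE = 1.5    # near enough to hand over an item or speak
AWARENESS_RANGE = 4.0  # person should at least see the robot flagged

@dataclass
class Pose:
    x: float
    y: float

def proxemic_zone(robot: Pose, person: Pose) -> str:
    """Classify the robot's distance from a person into a behavior zone."""
    d = math.hypot(robot.x - person.x, robot.y - person.y)
    if d < TOO_CLOSE:
        return "retreat"   # violates personal space; move away
    if d < SERVICE_RANGE:
        return "interact"  # close enough to provide a service
    if d < AWARENESS_RANGE:
        return "announce"  # e.g., highlight the robot in the AR overlay
    return "ignore"        # far enough away that no action is needed

print(proxemic_zone(Pose(0.0, 0.0), Pose(1.0, 0.5)))  # -> interact
```

An AR tablet or headset could render these zones directly, coloring each robot by its current zone so workers know at a glance which machines are about to come near them.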

'Bugs' on the ocean

Futurists predict that more than 1 trillion autonomous nodes will be integrated into all human activities by 2035 as part of the "internet of things." Soon, pretty much any object, big or small, will feed information to a central database without the need for human involvement.

What makes this idea tricky is that 71% of the Earth's surface is covered in water, and aquatic environments pose critical environmental and logistical issues. To address these challenges, the U.S. Defense Advanced Research Projects Agency (DARPA) has started a program called the Ocean of Things.

As part of that initiative, Professor Seokheun "Sean" Choi, Assistant Professor Anwar Elhadad, PhD '24, and PhD student Yang "Lexi" Gao from the Department of Electrical and Computer Engineering developed a self-powered "bug" that can skim across the water, and they hope it will revolutionize aquatic robotics.

Over the past decade, Choi has received research funding from the Office of Naval Research to develop bacteria-powered biobatteries that have a possible 100-year shelf life. The new aquatic robots use similar technology because it is more reliable than solar, kinetic or thermal energy systems under adverse conditions.

"When the environment is favorable for the bacteria, they become vegetative cells and generate power," Choi says, "but when the conditions are not favorable (for example, it's really cold or the nutrients are not available) they go back to spores. In that way, we can extend the operational life."

The research showed power generation close to 1 milliwatt, which is enough to operate the robot's mechanical movement and any sensors that could track environmental data such as water temperature, pollution levels, the movements of commercial vessels and aircraft, and the behaviors of aquatic animals. The next step in refining these aquatic robots is testing which bacteria will be best for producing energy under stressful ocean conditions.

"We used very common bacterial cells, but we need to study further to know what is actually living in those areas of the ocean," Choi says.

Diving under the sea

For Assistant Professor Monika Roznere '18, developing robots for underwater environments brings unique challenges. Seeing beneath the surface is different from seeing above it: GPS and Bluetooth don't work, and transmitting any signal to exchange information can be tricky.

The good news, she says, is that many avenues to possible solutions are unexplored, offering the potential for truly groundbreaking research.

While a computer science undergraduate at Watson, Roznere worked on computer vision with Distinguished Professor Lijun Yin. At the same time, her older sister was earning a PhD in ecology at a university in Ohio and needed to get scuba certified as part of her research on mussels. Roznere decided that sounded like fun and took a scuba class at Binghamton.

Her career path took a turn while she was pursuing her PhD at Dartmouth College: A professor asked her if she wanted to join his underwater robotics lab.

"He said, 'Are you scuba certified? Do you want to dive with robots?' I was like, 'Yeah!'" Roznere says with a laugh. "That's when I changed my focus from computer graphics. I'm going to swim with robots!"

A few tools can help robots get around under the sea. Thanks to war films set on military submarines, most of us know about sonar, which sends out audible "pings" and listens for echoes to calculate the distance and direction of objects around it.
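The ranging math behind those pings is simple time-of-flight arithmetic. Here is a minimal sketch, assuming a nominal speed of sound in seawater of about 1,500 meters per second (the real value varies with temperature, salinity and depth):

```python
# Nominal speed of sound in seawater; an assumption for illustration,
# since the true value varies with temperature, salinity and depth.
SPEED_OF_SOUND_WATER = 1500.0  # meters per second

def echo_distance(round_trip_seconds: float) -> float:
    """Estimate the distance to an object from a sonar ping's round trip.

    The ping travels out and back, so the one-way distance is half the
    total path the sound covers.
    """
    return SPEED_OF_SOUND_WATER * round_trip_seconds / 2.0

# An echo returning after about 6.7 milliseconds puts the object roughly
# 5 meters away, the kind of estimate Roznere wants from low-cost sensors.
print(f"{echo_distance(0.0067):.1f} m")  # -> 5.0 m
```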

However, a high-end sonar system can cost up to $30,000, won't give a full image of what it detects, doesn't render colors and can be noisy because of the mechanics required. Cameras work, of course, but the deeper underwater a robot goes, the less sunlight reaches it. Also, because light wavelengths are absorbed differently by water, everything looks blue and green.

"My research is about being creative with the lower-cost options that we have: a simple camera and a simple sonar," Roznere says. "How good can we get compared to high-end sensors? Can we create an algorithm that helps the robot figure out something is 5 meters away? What does it look like in my camera view?"

Her underwater focus has its perks, including trips to tropical Barbados to field-test the latest innovations, and she enjoys working with colleagues from multidisciplinary backgrounds as they try to solve problems from a variety of angles.

"A researcher once told me that if you are trying to hire a roboticist for a very challenging problem, get a marine roboticist, because they've already overcome all these difficult challenges," Roznere says. "How do you make a car autonomous in a snowy environment where you can't see the road and there are snowflakes everywhere? That's me! I get sediment floating around and fish flying in front of the cameras because I have lights, and they love lights."

An eye in the sky

Thanks to improved technology and reduced cost, more robots have taken to the skies over the past 20 years, and drones are more than just a fun hobby.

Assistant Professor Jayson Boubin from the School of Computing uses that bird's-eye view to find and analyze issues on the ground, including invasive species and landmines. To aid his research, he develops AI software that integrates the latest advances in edge computing.

"What is intelligence? What makes a machine smart?" he asks. "For me, intelligence is all about perception, understanding and decision-making. The smartest people we know are the ones who can understand their environments, understand the problems they're trying to solve and then solve them with incisive decision-making. I try to make my drones do that."

The challenge is accomplishing this level of autonomy given the limited weight, processing power and battery life of drones, also called unmanned aerial vehicles or UAVs. Boubin's solution is to determine where tasks fit on the edge/cloud continuum: in other words, figuring out what data is essential to keep onboard the drone and what can be transmitted elsewhere for processing and storage. Other complicating factors are latency (how long it takes for a signal to transmit to "base" and return with further instructions) and what to do in rural areas without adequate cell coverage.

"That's the thesis of my research area: making UAVs as smart as possible within those very real and complicated constraints," he says.
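To make that edge/cloud trade-off concrete, here is a minimal decision-rule sketch. It is illustrative only, with hypothetical numbers and a made-up Task structure, not code from Boubin's software:

```python
from dataclasses import dataclass

@dataclass
class Task:
    onboard_seconds: float   # estimated compute time on the drone itself
    upload_megabits: float   # data to send if the task is offloaded
    deadline_seconds: float  # how soon the result is needed

def place_task(task: Task, link_mbps: float, rtt_seconds: float,
               cloud_seconds: float) -> str:
    """Choose 'edge' (onboard) or 'cloud' for a single task.

    Offloading pays off only when upload time plus the network round trip
    plus cloud compute still beats onboard compute and meets the deadline.
    With no usable link (rural areas), the drone must stay on the edge.
    """
    if link_mbps <= 0:
        return "edge"
    offload = task.upload_megabits / link_mbps + rtt_seconds + cloud_seconds
    if offload < task.onboard_seconds and offload <= task.deadline_seconds:
        return "cloud"
    return "edge"

# A heavy vision workload over a decent cellular link: offloading wins.
frame = Task(onboard_seconds=4.0, upload_megabits=8.0, deadline_seconds=2.0)
print(place_task(frame, link_mbps=20.0, rtt_seconds=0.1, cloud_seconds=0.5))  # -> cloud
```

In practice the same rule would also have to account for battery draw, since radio transmission and onboard compute consume power differently, but the core latency comparison looks like this.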

With funding from the Air Force Research Laboratory, Boubin explores two main areas where drones can make a difference. One is ecology and agriculture, with UAVs providing overall views of forests or farmers' fields. By looking for anomalies, drones can spot insidious insect pests such as the spotted lanternfly (which harms trees and vineyards), the brown marmorated stink bug (which attacks various fruits and vegetables) and the Japanese beetle (which strips leaves on soybean plants).

He also focuses on locating landmines and other unexploded ordnance in former war zones, work that could also help find the remains of combatants who are missing in action. Later this year, he hopes to conduct experiments with replica (nonexplosive) landmines at one of the University's soccer fields.

Projects like these are more appealing to him than programming drones for leisure activities or warfare: "I like to have drones solve problems that I think have significance and that fulfill me when I attempt to solve them."

But do we really need that robot?

SSIE Assistant Professor Stephanie Tulk Jesso researches the interactions between humans and technology as well as more general ideas of human-centered design: in short, asking people what they want from a product, rather than just forcing them to use something unsuitable for the task.

As an example, she points to one of her past projects to mitigate fall risks in hospital settings. Could robots take patients to the bathroom and free up time for human staff? Research showed that nurses, who are primarily the ones who escort patients to the bathroom, didn't want that because of the nuanced human needs involved in that particular kind of care.

"A roboticist wants to build a robot. An AI scientist wants to build AI," she says. "I can say, 'Oh, that's a bad idea. Let's not.'"

One big problem: We project human qualities onto robots that look like us, even though they perceive and evaluate the world very differently. Then when they do something we don't predict or fail to meet our expectations, we are disappointed. Or, worse, companies continue to use them despite their inadequacies because of the money already spent to purchase them.

Another thing holding back robotics, Tulk Jesso believes, is Moravec's paradox, an observation made in the 1980s by computer scientist Hans Moravec. He noted that mimicking higher-level thinking (playing chess, doing math or writing essays) is inherently easier for AI than replicating innate skills like perception and motor control, which evolutionary natural selection has finely honed over millions of years.

In everyday environments outside of a highly controlled lab, "a 3-year-old child can navigate, pick up objects and arrange things on a table much better than the most sophisticated robot in the world right now," she says.

Circling back to healthcare, Tulk Jesso thinks there are tasks that robots could do successfully. They could fetch items for their human coworkers or help patients in isolation rooms, allowing nurses to focus on patient care without spending extra time and energy putting on personal protective equipment every time someone has a low-level need such as an extra pillow or blanket.

"I do think there are opportunities where robots really could do something helpful," she says. "But we as a society need to temper all of this unchecked enthusiasm and methodically evaluate what this technology is even capable of. Otherwise, in five years we're not going to use robots, or when we see them, everybody's going to groan and be like, 'Great, there's another stupid robot.'"