DARPA’s Rescue-Robot Showdown - IEEE Spectrum

“Where are the robots?” That was what many people were asking when events at Japan’s Fukushima nuclear power plant spiraled out of control in March 2011. With deadly levels of radiation collecting inside the damaged reactors, attempting to repair them became too dangerous for an emergency crew. Could Japan, a country known for its automated factories and advanced humanoids, use robots to take the place of human workers and stop the disaster in its tracks?

The answer, alas, was no: Robots are generally still too limited in what they can do. They may be great for carrying out repetitive tasks in clutter-free environments, but entering a rubble-strewn building, climbing ladders, using fire hoses—these operations are beyond today’s best robots.

Robot Rescuers: CHIMP (top) and Atlas are two robots competing in DARPA’s simulated disaster response. Photos, from top: National Robotics Engineering Center/Carnegie Mellon University; DARPA

The U.S. Defense Advanced Research Projects Agency (DARPA) wants to change that. Fukushima was a wake-up call for the robotics community around the world, and DARPA responded by launching its biggest and most ambitious robot R&D program yet. Called the DARPA Robotics Challenge, or DRC, it aims to accelerate the development of robots that can help humans, not only with nuclear emergencies but also with fires, floods, earthquakes, chemical spills, and other kinds of natural and man-made disasters.

DARPA (some call it the mad science division of the Pentagon) organized the DRC as a kind of Olympic decathlon for robots, open to teams from anywhere on the globe. But instead of running, jumping, and throwing things, the robots will score points by performing various tasks in a simulated industrial disaster. Picture hulking machines driving vehicles, using power tools, and breaking through walls. And instead of a gold medal, the winning team will take home a US $2 million cash prize.

This month, DARPA held a preliminary contest, the DRC Trials; the finals are scheduled for late 2014. These robo-spectacles are certain to draw the attention of the tech world and the general public, raising the stakes for the DRC program. If it’s a success, it will spawn a new generation of practical robots—and, perhaps inevitably, endless jokes about a robot uprising.

Robots are not new to disaster response. In the 1980s, Carnegie Mellon University engineers built robots that entered and made repairs inside the damaged reactor at the Three Mile Island nuclear facility, in the United States, and at Chernobyl, in the former Soviet Union. One of the first reported uses of robots in a search-and-rescue operation was in 2001 at the World Trade Center, in New York City, after the 9/11 attacks.

At the DRC Trials, the robots face eight tasks:

- Drive a utility vehicle across a 76-meter obstacle-laden track.
- Traverse a rough-terrain course covered with tripping hazards.
- Clear lumber and pipes that are blocking an entryway.
- Open a push door, a pull door, and a door with a self-closing hinge.
- Climb a narrow industrial-type ladder with a 60-degree incline.
- Break panels of drywall using power tools and their own hands.
- Locate and close different kinds of valves on a wall and a table.
- Pick up a fire hose, unspool it, and connect it to a fire hydrant.

Emergency workers all over the world have since been using small, remotely controlled vehicles equipped with cameras and sensors to locate victims and to map disaster sites. Most of these machines have tracks and look like tiny tanks. Some models have manipulators, but these are not strong or dexterous. In the aftermath of the Fukushima accident, some tracked robots were sent into the reactors, helping to assess the damage and perform cleanup tasks. The machines proved useful, but DARPA believes that disaster robots could do much more.

Organizers of the DRC hope to advance many aspects of robotics, including locomotion, manipulation, perception, and navigation. In the end, DARPA wants robots that can get around as easily as human rescue workers do. Mechanical first responders should also be able to use vehicles and tools designed for people. The DRC requires that the machines be able to exercise a great deal of autonomy, performing tasks with minimum supervision.

That’s a ridiculously ambitious goal. To put things into perspective, consider today’s most advanced robots. Some, like Honda’s Asimo and South Korea’s Hubo, can walk and even run. Others, including NASA’s Robonaut and Germany’s Rollin’ Justin, can grasp and use tools. And PR2, developed by the Silicon Valley lab Willow Garage, can map its environment, drive around, and handle objects. A host of tracked vehicles, like iRobot’s PackBot, used to disarm bombs, can negotiate rough terrain and perform manipulations. But a robot that can do all that while operating in a deteriorated environment with limited access to communication and power, as DARPA is aiming for, is unheard of. That goal could easily take a decade or two, but the sponsors of the DRC want to show significant progress in just two years.

As Dennis Hong, a roboticist at Virginia Tech, put it, the pace has been “insane.” His team is building a humanoid called THOR (Tactical Hazardous Operations Robot), powered by linear actuators that Hong and his colleagues are engineering from scratch.

Although THOR and other DRC contenders have humanoid forms, teams are free to design any kind of machine they want. So it’s possible that at the finals we may see robots with distinctly nonhuman shapes or features that enhance their capabilities. The Carnegie Mellon team, for example, is building CHIMP (CMU Highly Intelligent Mobile Platform), which looks like a cross between an ape and a tank. Its limbs have tracks at their extremities, to assist with locomotion over uneven terrain.

DARPA tried to make the program as open as possible, and more than 100 teams registered. In the end, only a dozen or so are expected to make it to the finals. The agency provided some of the teams with financial support. Others are self-funded.

To make the DRC accessible to groups that couldn’t afford to build their own robots, DARPA organized a virtual competition, which took place in June. Teams not building hardware had to use a simulator, developed by the Open Source Robotics Foundation, to program a virtual humanoid to perform some of the tasks that will be judged in the real contest.

But here’s the best part: The top performers in the virtual competition were each allowed to borrow a $2 million robot from DARPA for the upcoming hardware face-off. The robot, called Atlas, was built by Boston Dynamics, most famous for its BigDog quadruped. Powered by hydraulic actuators, Atlas is nearly 2 meters tall and weighs 150 kilograms—as much as a large refrigerator. In a video demo, a 9-kg wrecking ball hanging by a strap slams into Atlas as it stands on one foot; the robot quickly adjusts its balance.

For teams that receive an Atlas loaner, a big challenge is transferring what they accomplished in simulation to the real robot, where every move must be carefully executed. Make an error in the control algorithm and your Atlas might come crashing down face first. One team reported that a buggy line of code nearly sent one of Atlas’s feet into its own chest. Another said that while they were teaching Atlas to walk over concrete blocks, the robot ended up kicking the blocks with enough force to destroy them.

In simulation, you can always try new things and start it all over when it doesn’t work, says Michael Gennert, one of the leaders of the Worcester Polytechnic Institute team. With a $2 million robot, not so much. “You can’t just reboot it after a crash,” he says.

DARPA specifically mentions the Fukushima accident as an example of a disaster that would have benefited from more capable robots. Indeed, the scenario DARPA is planning for the final competition closely resembles the dramatic events that unfolded in the first 24 hours of the Fukushima catastrophe, when workers attempted but ultimately failed to fix one of the crippled reactors.

DRC program manager Gill Pratt rejects the notion that the tasks DARPA has concocted for the robots might be too difficult given the current state of the art. In the agency’s parlance, the tasks are “DARPA hard,” he says, but not impossible. “It’s a goal that has a lot of risk but a lot of reward as well, and that’s really the theme of what DARPA tries to do.”

Most teams performed reasonably well during the preliminary contest, and they’re hopeful that their robots will do even better next year. So we might see a repeat of what happened with DARPA’s earlier challenges for self-driving vehicles, which spurred a huge advance in robotic cars. In just a few years, they went from erratic prototypes to reliable machines able to drive themselves admirably, first through the desert and later around a mock city. Some of their designers went on to work for Google, developing its now-famous self-driving cars.

The same could happen with the DRC. And even if the challenge fails to foster the creation of practical disaster robots in the near future, it will certainly show their possibilities and propel many technologies forward.

“The actual competition robots are prototypes and not yet ready for deployment,” says Seth Teller of the MIT team. “But the DRC is a first and important step toward a future in which, even as disasters like Fukushima unfold, people will be able to send machines to do their bidding.”

This article originally appeared in print as “Rescue-Robot Showdown.”

Extraordinarily thin sheets in ferroelectric crystals may lead to flexible, adaptable electronics

Charles Q. Choi is a science reporter who contributes regularly to IEEE Spectrum. He has written for Scientific American, The New York Times, Wired, and Science, among others.

(Top left) Piezoresponse force microscopy images of ferroelectric domains in lithium niobate. (Bottom left) Conducting atomic force microscopy images of ferroelectric domains in lithium niobate. (Top right) Piezoresponse force microscopy images of a lithium niobate thin film. (Bottom right) Cross-sectional high-angle annular dark-field scanning transmission electron microscopy image of ferroelectric domains in lithium niobate.

Atomically thin materials such as graphene have drawn attention because electrons can race through them at exceptionally high speeds, leading to visions of advanced new electronics. Now scientists find that similar behavior can exist within two-dimensional sheets known as domain walls that are embedded within unusual crystalline materials. Moreover, unlike other atomically thin sheets, domain walls can easily be created, moved, and destroyed, which may open the way to novel circuits that can be instantly reconfigured or repaired on command.

In the new study, researchers investigated a crystalline ferroelectric film of lithium niobate just 500 nanometers thick. In ferroelectrics, electric charges within the material separate into positive and negative poles, forming electric dipoles that are generally oriented in the same direction.

The electric dipoles in ferroelectrics are clustered in regions known as domains. These are separated by two-dimensional layers known as domain walls.

The amazing electronic properties of two-dimensional materials such as graphene and molybdenum disulfide have led researchers to hope they may allow Moore's Law to continue once it becomes impossible to make further progress using silicon. Researchers have also investigated similarly attractive behavior in exceptionally thin electrically conducting heterointerfaces between two different insulating materials, such as lanthanum aluminate and strontium titanate.

Domain walls are essentially homointerfaces between chemically identical regions of the same material. However, unlike any other 2D electronic material, domain walls can readily be created, moved, and annihilated inside a material by applied electric or magnetic fields.

This unique quality of domain walls may potentially lead to novel "domain wall electronics" far more flexible and adaptable than current devices that rely on static components. One might imagine entire circuits "created in one instant, for one purpose, only to be wiped clean and rewritten in a different form, for a different purpose, in the next instant," says study lead author Conor McCluskey, a physicist at Queen's University Belfast in the United Kingdom. "Malleable domain wall network architecture that can continually metamorphose could represent a kind of technological genie, granting wishes on demand for radical moment-to-moment changes in electronic function."

However, scientists have found it difficult to examine domain walls in detail. The fact that domain walls are both very thin and buried under the surfaces of crystals makes them less easy to analyze "than regular 3D or even 2D materials," McCluskey says.

In the new study, McCluskey and his colleagues focused on how the domain walls in the crystals they were investigating are shaped like cones. This geometry let them analyze the behavior of the domain walls using a relatively simple probe design.

"Malleable domain wall network architecture that can continually metamorphose could represent a kind of technological genie, granting wishes on demand for radical moment-to-moment changes in electronic function."—Conor McCluskey

The scientists found that the electric charge mobility within the domain walls was exceptionally high at room temperature. The mobility may be "the highest room-temperature value in any oxide" and "at least comparable to that seen in graphene," McCluskey says. They detailed their findings in the 11 August issue of the journal Advanced Materials.

Precise values of such parameters "are needed for envisioning and building devices that work reliably," McCluskey says. "The dream is that it could allow completely malleable or ephemeral nanocircuitry to be created, destroyed and reformed from one moment to the next."

One promising application for domain walls may be brain-mimicking neuromorphic computing, with neuromorphic devices playing the role of the synapses that link neurons together, McCluskey says.

"The brain works by forging pathways which have some memory about their history: if a particular synaptic pathway is used more frequently, it becomes stronger, making it easier for this pathway to be used in the future. The brain learns by forging these stronger pathways," McCluskey says. "Some domain wall systems can behave in the same way: if you apply a small voltage to walls in our particular system, they tilt and change slightly, increasing their conductivity and giving a higher current. The next pulse will produce a higher current, and so on and so on, as if they have some memory of their past."

If domain walls can play the role of artificial synapses, "this could pave the way to a low-heat-production, low-power-consumption brain-like architecture for neuromorphic computing," he adds.

However, although reconfigurable electronics based on domain walls are a tantalizing idea, McCluskey notes that in many ferroelectrics, the domain walls conduct only marginally better than the rest of the material, and so they will likely not help support viable devices.

"This isn't a problem for the system we have investigated, lithium niobate, as it has quite an astonishing ratio between the conductivity of the domain walls and the bulk material," McCluskey says. However, lithium niobate does currently require large voltages to manipulate domain walls. Scaling these systems down in thickness for use with everyday voltages "is one major hurdle," he notes. "We are working on it."

Future experiments will explore why electric charge mobility is so fast in domain walls. "Broadly speaking, the carrier mobility relies on two things—the number of times the charge carrier will scatter or bump into something on its journey through the material, and the so-called 'effective mass' with which the carrier moves," McCluskey says.
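
These two ingredients are linked by the standard Drude expression for mobility, mu = q*tau/m*, where tau is the mean time between scattering events and m* is the effective mass. The quick estimate below is only illustrative; the values of tau and m* are placeholders, not numbers from the study.

```python
# Drude estimate of carrier mobility: mu = q * tau / m_eff.
# tau and m_eff below are placeholder values, not measured ones.
q = 1.602e-19        # elementary charge, C
m_e = 9.109e-31      # free-electron mass, kg

tau = 1e-13          # assumed scattering time, s
m_eff = 0.2 * m_e    # assumed effective mass

mu = q * tau / m_eff                             # m^2 / (V s)
print(f"mobility ~ {mu * 1e4:.0f} cm^2/(V s)")   # ~880
```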

Electrons can deflect off defects in materials, as well as off lattice vibrations known as phonons. "It is possible the presence of a domain wall alters the defect or phonon concentrations locally, resulting in fewer scattering centers along the domain wall," McCluskey says.

When it comes to the effective mass of a charge carrier such as an electron, "when we consider an electron moving through a crystal lattice, we need to consider it not as a free electron, such as one in vacuum, but as an electron moving through the solid crystalline environment," McCluskey explains. "The electron feels the effect of the nearby atoms as it progresses, changing its energy as it moves closer or further away from any given atom." This can essentially make an electron moving in a crystal lighter or heavier than a normal electron. The way in which domain walls disturb the crystal lattice may in turn alter the carrier's effective mass, he says.

"Without further experiment, it's impossible to say which of these contributions is more responsible for determining the carrier mobility in our system," McCluskey says. ""We hope that our study prompts a shift in focus towards characterizing the transport in domain wall systems, which may be every bit as exciting as some of the other 2D functional materials systems at the forefront of research today."

Scientist Scott Acton on optimizing the wavefront sensing and control of the James Webb Space Telescope

Ned Potter, a writer from New York, spent more than 25 years as an ABC News and CBS News correspondent covering science, technology, space, and the environment.

The James Webb Space Telescope, in just a few months of operation, has begun to change our view of the universe. Its images—more detailed than what was possible before—show space aglow with galaxies, some of them formed very soon after the big bang.

None of this would be possible without the work of a team led by Scott Acton, the lead wavefront sensing and control scientist for the Webb at Ball Aerospace & Technologies in Colorado. He and his colleagues developed the systems that align the 18 separate segments of the Webb’s primary mirror with its smaller secondary mirror and science instruments. To produce clear images in the infrared wavelengths the telescope uses, the segments have to be within tens of nanometers of the shape specified in the spacecraft design.

Acton grew up in Wyoming and spent more than 20 years on the Webb team. IEEE Spectrum spoke with Acton after his team had finished aligning the telescope’s optics in space. This transcript has been edited for clarity and brevity.

Tell your story. What got you started?

Scott Acton: When I was seven years old, my dad brought home a new television. And he gave me the old television to take apart. I was just enthralled by what I saw inside this television. And from that moment on I was defined by electronics. You look inside an old television and there are mechanisms, there are smells and colors and sights, and for a seven-year-old kid, it was just the most amazing thing I’d ever seen.

Fast-forward 25 years and I’m working in the field of adaptive optics. And eventually that led to wavefront sensing and controls, which led to the Webb telescope.

Called the Cosmic Cliffs, Webb’s seemingly three-dimensional picture looks like craggy mountains on a moonlit evening. In reality, it is the edge of the giant, gaseous cavity within NGC 3324, and the tallest “peaks” in this image are about 7 light-years high. NASA/ESA/CSA/STScI

Talk about your work getting the telescope ready for flight. You worked on it for more than 20 years.

Acton: Well, we had to invent all of the wavefront sensing and controls. None of that technology really existed in 2001, so we started from the ground up with concepts and simple experiments. Then more complicated, very complicated experiments and eventually something known as TRL 6 technology—Technology Readiness Level 6—which demonstrated that we could do this in a flightlike environment. And then it was a question of taking this technology, algorithms, understanding it and implementing it into very robust procedures, documentation, and software, so that it could then be applied on the flight telescope.

What was it like finally to launch?

Acton: Well, I’ve got to say, there was a lot of nervousness, at least on my part. I was thinking we had a 70 percent chance of mission success, or something like that. It’s like sending your kid off to college—this instrument that we’d been looking at and thinking about.

The Ariane 5 vehicle is so reliable. I didn’t think there was going to be any problem with it, but deployment starts, basically, minutes after launch. So, for me, the place to be was at a computer console [at the Space Telescope Science Institute in Baltimore].

And then there were a lot of things that had to work.

Acton: Yes, right. But there are some things that are interesting. They have these things called nonexplosive actuators [used to secure the spacecraft during launch]. There are about 130 of them. And you actually can’t test them. You build them and they get used, basically, once. If you do reuse one, well, it’s now a different actuator because you have to solder it back together. So you can’t qualify the part, but what you can do is qualify the process.

We could have still had a mission if some didn’t fire, but most of them were absolutely necessary for the success of the mission. So just ask yourself, let’s suppose you want to have a 95 percent chance of success. What number raised to the 130th power is equal to 0.95? That number is basically one. These things had to be perfect.
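
To make Acton's back-of-the-envelope arithmetic concrete, here is a minimal sketch, assuming 130 independent and identically reliable actuators, of the per-actuator reliability a 95 percent overall success target implies:

```python
# Per-actuator reliability needed so that all 130 independent
# actuators fire successfully with 95 percent overall probability.
n = 130
overall_target = 0.95

per_actuator = overall_target ** (1 / n)
print(f"Required per-actuator reliability: {per_actuator:.5f}")  # ~0.99961

# Conversely, if each actuator were "only" 99.9 percent reliable:
print(f"Overall success at 99.9 percent each: {0.999 ** n:.3f}")  # ~0.878
```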

I think the public perception was that the Webb was in very good shape and the in-flight setup all went very well. Would you say that’s accurate?

Acton: Early on in the mission there were hiccups, but other than that, I’d say things just went beyond our wildest expectations. Part of that comes down to the fact that my team and I had commissioned the telescope 100 times in simulations. And we always made it a little harder. I think that served us well because when we got to the real telescope, it was quite robust. It just worked.

Take us through the process of aligning the telescope.

Acton: The first image we got back from the telescope was 2 February, in the middle of the night. Most people had gone home, but I was there, and a lot of other people were too. We just pointed the telescope at the Large Magellanic Cloud, which has lots and lots of stars in it, and took images on the near-infrared cameras. People were really happy to see these images because they were looking basically to make sure that the science instruments worked.

But some of us were really concerned with that image, because you could see some very significant astigmatism—stronger than we were expecting to see from our simulations. Later we would learn that the telescope’s secondary mirror was off in translation—about 1.5 millimeters along the deployment axis and about a millimeter in the other axis. And the primary mirror segments were clocked a bit from the perfectly aligned state.

Lee Feinberg, the telescope lead at NASA Goddard, texted me and said, “Scott, why can’t you just simulate this to see if you can get some images that bad?” So that morning I ran a simulation and was able to reproduce almost exactly what we were seeing in these images. We realized that we were not going to have any major problems with the wavefront.

Describe the cadence of your work during commissioning. What would a day be like?

Acton: One of the rules we set up very early on was that in terms of wavefront sensing and control, we would always have two people sitting in front of the computers at any given time. Anytime anything significant happened, I always wanted to make sure that I was there, so I got an apartment [near the institute in Baltimore]. From my door to the door of the Mission Operations Center was a 7-minute walk.

In this mosaic image stretching 340 light-years across, Webb’s Near-Infrared Camera (NIRCam) displays the Tarantula Nebula star-forming region in a new light, including tens of thousands of never-before-seen young stars that were previously shrouded in cosmic dust. NASA/ESA/CSA/STScI/Webb ERO Production Team

There were certainly times during the process where it had a very large pucker factor, if you will. We couldn’t point the telescope reliably at the very beginning. And a lot of our software, for the early steps of commissioning, depended on the immutability of telescope pointing. We wanted to have the telescope repeatedly pointed to within a couple of arc-seconds and it was closer to 20 or 30. Because of that, some of the initial moves to align the telescope had to be calculated, if you will, by hand.

I remember walking home one night, talking on the phone to my wife, Heidi, and saying, “If I’m wrong about this I’ve just completely screwed up the telescope.” She said, “Scott, that’s why you’re there.” That was her way of telling me to cowboy up. The responsibility had to come down to somebody and in that moment, it was me.

But when the result came back, we could see the images. We pointed the telescope at a bright isolated star and then we could see, one at a time, 18 spots appearing in the middle of our main science detector. I remember a colleague saying, “I now believe we’re going to completely align the telescope.” He felt in his mind that if we could get past that step, that everything else was downhill.

You’re trying to piece together the universe. It’s hard to get it right, and very easy to make mistakes. But we did it.

Building the Webb was, of course, a big, complicated project. Do you think there are any particular lessons to be drawn from it that people in the future might find useful?

Acton: Here are a couple of really big ones that apply to wavefront sensing and control. One is that there are multiple institutions involved—Northrop Grumman, Ball Aerospace, the Goddard Space Flight Center, the Space Telescope Science Institute—and the complication of having all these institutional lines. It could have been very, very difficult to navigate. So very early on we decided not to have any lines. We were a completely badgeless team. Anybody could talk to anybody. If someone said, “No, I think this is wrong, you should do it this way,” even if they didn’t necessarily have contractual responsibility, everybody listened.

Another big lesson we learned was about the importance of the interplay between experimentation and simulation. We built a one-sixth scale model, a fully functional optical model of the telescope, and it’s still working. It allowed us, very early on, to know what was going to be difficult. Then we could address those issues in simulation. That understanding, the interplay between experimentation and modeling and simulations, was absolutely essential.

Recognizing, of course, that it’s very early, do you yet have a favorite image?

Acton: My favorite image, so far, was one that was taken during the last real wavefront activity that we did as part of commissioning. It was called a thermal slew test. The telescope has a large sunshield, but the sunshield can be at different angles with respect to the sun. So to make sure it was stable, we aimed it at a bright star we used as a guide star, put it in one orientation, and stayed there for five or six days. And then we switched to a different orientation for five or six days. It turned out to be quite stable. But how do you know that the telescope wasn’t rolling about the guide star? To check this, we took a series of test images with the redundant fine-guidance sensor. As you can imagine, when you have a 6-1/2 meter telescope at L2 away from any competing light sources that is cooled to 50 kelvins, yes, it is sensitive. Even just one 20-minute exposure is going to just have unbelievable detail regarding the deep universe. Imagine what happens if you take 100 of those images and average them together. We came up with an image of just some random part of the sky.
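
The payoff of averaging is that uncorrelated noise shrinks roughly as the square root of the number of frames, so stacking 100 exposures improves the signal-to-noise ratio by about a factor of 10. Here is a minimal sketch of that kind of frame stack, assuming the registered exposures are already available as equally sized NumPy arrays; the numbers are synthetic, not Webb data:

```python
import numpy as np

# Stack registered exposures: the mean keeps the signal while the
# uncorrelated noise drops roughly as 1 / sqrt(number of frames).
rng = np.random.default_rng(0)
signal = rng.random((64, 64))                        # stand-in "sky" image

frames = [signal + rng.normal(0, 0.5, signal.shape)  # noisy exposures
          for _ in range(100)]

stacked = np.mean(frames, axis=0)

noise_single = np.std(frames[0] - signal)
noise_stacked = np.std(stacked - signal)
print(f"noise reduced by ~{noise_single / noise_stacked:.1f}x")  # ~10x
```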

Scott Acton’s favorite Webb image: A test image of a random part of the sky, shot with the Webb’s fine-guidance sensor. The points with six-pointed diffraction patterns are stars; all other points are galaxies. NASA/CSA/FGS

I sent this image to James Larkin at UCLA, and he looked at it and estimated that that single image had 15,000 galaxies in it. Every one of those galaxies probably has between 100 [billion] and 200 billion stars.

I don’t talk about religion too much when it comes to this, but I must have had in my mind a Biblical reference to the stars singing. I pictured all of those galaxies as singing, as if this was a way for the universe to express joy that after all these years, we could finally see them. It was quite an emotional experience for me and for many people.

You realized that there was so much out there, and you weren’t even really looking for it yet? You were still phasing the telescope?

Acton: That’s right. I guess I’m not sure what I expected. I figured you’d just see dark sky. Well, there is no dark sky. Dark sky is a myth. Galaxies are everywhere.

Finally, we got to our first diffraction-limited image [with the telescope calibrated for science observations for the first time]. And that’s the way the telescope is operating now.

Several days later, about 70 of us got together—astronomers, engineers, and other team members. A member of the team—his name is Anthony Galyer—and I had gone halves several years earlier and purchased a bottle of cognac from 1906, the year that James Webb was born. We toasted James Webb and the telescope that bears his name.

Enhance your development efficiency with myBuddy, the most cost-effective dual-arm collaborative robot

This is a sponsored article brought to you by Elephant Robotics.

In July 2022, Elephant Robotics released myBuddy—a dual-arm, 13-axis humanoid collaborative robot powered by Raspberry Pi with multiple functions—at an incredible price. It works with multiple accessories such as suction pumps, grippers, and more. Additionally, users can boost their secondary development with the artificial intelligence and myAGV kits and detailed tutorials published by Elephant Robotics. myBuddy helps users achieve more applications and developments as a collaborative robotic arm.

Elephant Robotics is committed to the R&D and manufacture of collaborative robots such as myCobot, mechArm, myPalletizer, and myAGV. To meet the expectations of users in more than 50 countries worldwide and allow everyone to enjoy the world of robotics, Elephant Robotics continues to push for breakthroughs in product R&D and manufacturing capacity.

In 2020, the Elephant Robotics team saw that demand for robotics applications was increasing, so it decided to produce a robot with multiple functions that could meet more requirements. The development and production process presented many difficulties: at least three auxiliary control chips were needed to support the additional functions, increasing the production difficulty by more than 300 percent compared with myCobot, a 6-axis collaborative robot (cobot). The biggest problem was how to build a multifunctional robot at an affordable and reasonable price.

After more than two years of continuous effort, Elephant Robotics upgraded the myCobot series into the new myBuddy cobot, built on its highly integrated product design and self-developed robot control platform. myBuddy's design follows the myCobot series, combining rounded corners with a simple, clean industrial style. A dual-arm robot at an affordable price makes the development of dual-arm cobot applications no longer a problem.

The features and functions below show what applications myBuddy can support.

The working radius of a single arm of myBuddy is 280 millimeters, and the maximum payload is 250 grams. It is light and flexible, with 13 degrees of freedom. The built-in axis in the torso of myBuddy improves the working range by more than 400 percent compared to myCobot's single robotic arm, so it can perform more complicated tasks such as flag waving, kinematics practice, and AI recognition.

More than 100 API interfaces are available, and myBuddy's low-level control interfaces are open: potential values, joint angles, coordinates, running speeds, and other parameters can be read and controlled freely, so users can pursue application research on dual-arm robots, motion path planning, action development, and visual recognition. For hardware, myBuddy provides a variety of input and output interfaces, including HDMI, USB, Grove, 3.3V IO, LEGO, RJ45, and more.

In software, myBuddy supports multiple programming environments. myBlockly is a visual tool for graphical programming with multiple built-in robot application cases, making it simple for users to develop their own projects. Users can also control myBuddy in Python, setting joint angles and robot coordinates and reading speeds and positions in real time (response time as fast as 20 milliseconds). Moreover, myBuddy supports the ROS simulation development environment. With the built-in ROS environment, users can research robot motion path planning algorithms and dual-arm interference-avoidance algorithms, study robot vision, and develop other artificial intelligence applications.
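
A minimal sketch of that kind of Python control is shown below, using the pymycobot package that Elephant Robotics publishes for its cobots. The class name, method signatures, arm identifiers, and serial port are assumptions drawn from the vendor's general API pattern; check the current myBuddy documentation before running anything on real hardware.

```python
# Hypothetical sketch of joint-level control of myBuddy over serial.
# Class/method names, arm IDs, port, and baud rate are assumptions --
# verify against Elephant Robotics' current pymycobot documentation.
from pymycobot import MyBuddy
import time

LEFT_ARM, RIGHT_ARM = 1, 2                # assumed arm identifiers

robot = MyBuddy("/dev/ttyACM0", 115200)   # assumed port and baud rate

# Move the left arm's six joints to a neutral pose at 50 percent speed.
robot.send_angles(LEFT_ARM, [0, 0, 0, 0, 0, 0], 50)
time.sleep(2)

# Read back the right arm's joint angles.
print(robot.get_angles(RIGHT_ARM))
```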

myBuddy has a 7-inch interactive display, two 2-megapixel HD cameras, and more than 20 built-in dynamic facial expressions. Users can conduct scientific research in human-robot interaction, robot vision, robotics learning, artificial intelligence, action planning, mechatronics, manufacturing, and automation with myBuddy. The built-in cameras support area positioning as well as object and QR-code recognition, and myBuddy can perform face and body recognition, motion simulation, and trajectory tracking with them.
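
As one concrete illustration of the camera side, QR codes can be read from a USB camera feed with OpenCV alone. This generic sketch is independent of myBuddy's bundled vision software, and the camera index is an assumption.

```python
# Generic OpenCV QR-code reader for a USB camera feed (camera index 0
# is an assumption); not tied to myBuddy's own vision stack.
import cv2

cap = cv2.VideoCapture(0)
detector = cv2.QRCodeDetector()

while True:
    ok, frame = cap.read()
    if not ok:
        break
    data, points, _ = detector.detectAndDecode(frame)
    if data:
        print("QR code says:", data)
    cv2.imshow("camera", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```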

As VR technology matures into an area of research and development in its own right, Elephant Robotics decided to build wireless VR control into myBuddy. With this function, users can not only experience human-robot interaction and carry out hazardous scientific experiments remotely, but also explore the principles and basic applications of wireless cobot control in fields such as underwater exploration, remotely piloted vehicles, and space exploration. In the future, myBuddy could support surgeons as part of a virtual surgical system.

Elephant Robotics has developed more than 20 robotic arm accessories, including end-effectors, bases, cameras, a mobile phone gripper, and more. myBuddy has more flexibility, maneuverability, and load capacity than myCobot's single robotic arm: its ability to grasp and move both rigid and flexible objects has improved, and the two arms avoid colliding with each other while working. With these accessories, myBuddy can take on more applications in science and education. For example, after installing a gripper and a suction pump, myBuddy can grab test tubes and pour liquids.

A dual-arm robot at an affordable price is a preferred choice for many individual developers, especially teachers and students in robotics and engineering. myBuddy, with its wide range of supported functions, will help people explore and develop more possibilities in the world of robotics.
