Thursday, April 14, 2011

Google Robotic Cars


Almost everyone associates Google with the internet, but did you know the company is now working on a robotic car that can drive itself? In 2010, Google’s engineers began testing robot cars on actual roads in California. Google is using seven typical cars, six Toyota Priuses and an Audi TT, and installing “artificial-intelligence software that can sense anything near the car and mimic the decisions made by a human driver” (New York Times). The artificial-intelligence system on the robot cars consists of “a rotating sensor on its roof, which can scan more than 200 feet in all directions to create a 3D map of the car's environs; a video camera mounted behind the windshield, which helps the navigation system spot pedestrians, bicyclists, and traffic lights; three radar devices on the front bumper, and one in the back; and a sensor on one of the wheels that allows the system to determine the car's position on the 3D map” (CNET). The cars use a GPS navigation system to know where to go, and always drive the speed limit because the speed limit of every road is programmed into the navigation system. The cars are still in the early stages of development, so each test car still has a person behind the wheel to take over if needed and a technician in the passenger’s seat monitoring the software system. As of October 2010, Google’s seven test cars together had driven 1,000 miles with no human intervention and 140,000 miles with limited human control. Some of the impressive driving tasks the robot cars have completed include merging onto a busy freeway, reaching speeds around 60 mph, and then exiting the freeway a few exits down. So far, the only accident a robot car has been involved in came when one was “rear-ended while stopped at a traffic light” (New York Times).
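To get a feel for how that rotating roof sensor builds a 3D map, here is a tiny sketch in Python. It is a simplified illustration with made-up function and variable names, not Google’s actual software: it just converts raw readings from a spinning range sensor (an azimuth angle, an elevation angle, and a distance) into 3D points around the car.

```python
import math

def scan_to_points(readings, sensor_height_m=1.9):
    """Convert (azimuth_deg, elevation_deg, range_m) readings from a
    roof-mounted rotating sensor into (x, y, z) points near the car."""
    points = []
    for az, el, rng in readings:
        az_r, el_r = math.radians(az), math.radians(el)
        x = rng * math.cos(el_r) * math.cos(az_r)   # meters ahead of the car
        y = rng * math.cos(el_r) * math.sin(az_r)   # meters to the left
        z = sensor_height_m + rng * math.sin(el_r)  # height above the road
        points.append((x, y, z))
    return points

# A level reading straight ahead, 10 m out, maps to a point 10 m in front
# of the car at the sensor's own height.
print(scan_to_points([(0.0, 0.0, 10.0)]))
```

Sweeping the azimuth through 360 degrees is what gives the car the full-circle view described above; the real system fuses these points with the cameras, radar, and wheel sensor to place everything on its map.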
Though these cars are nowhere near ready to be mass produced and sold to the public, the dream is that one day, hopefully within a few decades, they will be. If these cars can be introduced to the public, there would be many advantages. The main goal is to increase driving safety, which should be possible because the robot cars can react more quickly than a human can. Other advantages of robot cars are that “they have 360-degree perception, and do not get distracted, sleepy, or intoxicated” (New York Times). Theoretically this could mean fitting more cars on the road, because with faster reaction times cars could drive closer together. Moreover, since the cars should greatly reduce the number of accidents, they could also be built lighter and more fuel-efficient.
The person behind the robot car testing is Sebastian Thrun, the 43-year-old director of the Stanford Artificial Intelligence Laboratory, a Google engineer, and the co-inventor of the Street View mapping service (New York Times). Currently 15 engineers are involved in the project, as well as more than a dozen people Google hired to act as “drivers.” Google stated that there is no plan yet to build a business from the robot cars; right now the company is searching for ways to increase highway safety and decrease the nation’s energy costs. At this point, Google’s engineers willingly admit there are still many unsolved problems. For example, they are unsure how to have the artificial-intelligence system recognize and react to the signals a traffic cop or crossing guard might give.

Submersible Robots

If one ever dreamt of exploring the deepest, darkest regions of our planet, one must have been thinking of our planet’s lakes and oceans. Something captivates our imaginations as children when we try to picture what lies beneath those cold, blue waves. As we all know, children grow up (almost too soon), and with some lucky few becoming scientists, our dreams may linger but our imaginations never cease to drive our deepest ambitions. As Shakespeare wrote, “We are such stuff as dreams are made on; and our little life is rounded with a sleep.”


So how can our dreams take us into the deep, dark void of Earth’s watery abode? Well, we begin in the high school physics classroom. In the picture shown above is a modern underwater robot created from resources available to anyone with $150 to spare. In fact, the most expensive component is the CCTV camera system.

The high school student only needs to understand a few key concepts of underwater robotics, such as neutral buoyancy, basic circuitry, and three-dimensional maneuverability. The best part is that every single one of these concepts can be tied to a teaching standard at the state and federal levels and can be taught using any number of teaching strategies and learning styles. Although this model is made for operating in a pool, it only takes a little knowledge and insight to nudge a student in the right direction towards wanting to build something a little more complicated.
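Neutral buoyancy, for instance, boils down to one balance: the robot neither floats nor sinks when its total mass equals the mass of the water it displaces. Here is a classroom-style sketch with made-up numbers (a 5 kg robot displacing 6 liters of pool water, trimmed with lead ballast); the function name and inputs are our own invention.

```python
WATER_DENSITY = 1000.0  # kg/m^3, fresh water (a pool)

def ballast_for_neutral_buoyancy(robot_mass_kg, robot_volume_m3,
                                 ballast_density_kg_m3=11340.0):  # lead
    """Mass of ballast that makes the robot neutrally buoyant.
    Accounts for the water the ballast itself displaces."""
    # Neutral buoyancy: m + b = rho_w * (V + b / rho_b); solve for b.
    rho_w, rho_b = WATER_DENSITY, ballast_density_kg_m3
    return (rho_w * robot_volume_m3 - robot_mass_kg) / (1 - rho_w / rho_b)

# A 5 kg robot with 0.006 m^3 of volume needs roughly 1.1 kg of lead.
print(ballast_for_neutral_buoyancy(5.0, 0.006))
```

Note the small correction for the ballast’s own displacement: naively you would add exactly 1 kg, but the lead pushes a little extra water out of the way, so slightly more is needed.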


So why the recent popularity of submersible robotics? What kind of technology was needed to make robots useful in place of manned underwater craft? For starters, sending people down in submersibles is both a complicated and risky business. Several environmental factors need to be taken into account, such as pressure and temperature. Most of the complexity comes from trying to keep passengers alive in environments where pressures can reach several tons per square inch!
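That pressure figure is easy to sanity-check with the hydrostatic formula P = P_atm + ρgh. The sketch below uses ordinary seawater constants and a depth near the ocean’s deepest point (both our own choices for illustration), and lands in the neighborhood of 16,000 psi, roughly eight tons per square inch.

```python
def pressure_at_depth(depth_m, rho=1025.0, g=9.81, surface_pa=101325.0):
    """Absolute pressure in pascals at a given seawater depth:
    atmospheric pressure plus the weight of the water column above."""
    return surface_pa + rho * g * depth_m

PA_PER_PSI = 6894.76  # pascals per pound-per-square-inch

depth_m = 10900  # roughly the deepest point of the ocean
psi = pressure_at_depth(depth_m) / PA_PER_PSI
print(psi, psi / 2000)  # psi, and (short) tons per square inch
```

Every 10 meters of seawater adds about one more atmosphere of pressure, which is why even modest dives demand serious engineering for a crewed hull.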

So what if one were able to explore the bottom of the ocean from the comfort of an easy chair? For years, engineers dreamt of a technology that would let a user remotely connect to a submersible and explore these dangerous environments without taking on any of the risk. Well, the future is now: the latest networking research and technologies have been applied to submersible robots.

Leading the way, government agencies such as the National Oceanic and Atmospheric Administration (NOAA), along with energy corporations such as BP and Exxon-Mobil, integrated these technologies into their operations as soon as their engineers and scientists made them practical. NOAA uses submersible robots to explore areas of the hydrosphere that are not worth the human risk of exploration, while BP and Exxon-Mobil use submersibles to build and maintain pipelines and wellheads.


During the most recent oil catastrophe in the Gulf of Mexico, the Deepwater Horizon spill, remotely controlled submersibles were used to repair the broken wellhead that caused the spill, because they could “withstand the 5000 lbs of pressure, lift a ton, and provide hydraulic power to other pieces of equipment.” The key component that made this technology useful was its ability to transmit 3D video images back to the repair team working safely out of harm’s way at the surface. These submersible robots are often referred to as remotely operated vehicles (ROVs).

Some corporations are starting to research the use of submersible robots for laying oil pipelines and fiber-optic cables. For example, the SMD Ultra Trencher 1, a $20 million, 50-ton robot that digs underwater trenches up to 1500 meters below the surface, provides just the right tool for the job in these essential industries. Robots such as these are capable of revolutionizing the way energy and information are distributed on a global scale.


On the near horizon, or maybe even now, unmanned water vehicles (UWVs) are in the works in both the government and private sectors. They can travel great distances while collecting data remotely, without any nearby human support network. As for remote power sources (since these submersibles are not attached to a cable), models using photo-electric solar panels have made it through the prototype phase, while the use of small nuclear reactors has been explored in paper studies. Below is an example of a UWV:


Continuing with the idea of sustainability, another development in submersible robots that has been making headlines is taking place at the Woods Hole Oceanographic Institution (WHOI): robots that capture energy from their watery surroundings and use it to power themselves. Based on the principles of thermal expansion, these underwater “gliders” can travel great distances for extended periods of time (up to two years, versus the few-month limit of comparable machines) with virtually no energy input. They can be considered hybrids, since they still run on some battery power for steering and on-board electronics, like the Toyota Prius of submersibles.

From the article:
"They can be very helpful in getting measurements that would be too expensive to get otherwise — any kind of study that requires long-term measurements from multiple locations," Hodges told LiveScience. "If you had to be there in a ship, it would cost millions of dollars."


One of the most famous submersibles has to be Alvin, which, with its accompanying ROV Jason Jr., was employed by Dr. Robert Ballard in 1986 on some of the first explorations of the wreckage of the Titanic. This helped demonstrate the usefulness of robots in hostile environments like those found at ocean depths of 5,000 feet and temperatures near zero Celsius. It paved the way not only for backing of further research into submersible robotics, but also for new adventures for Alvin. The BBC reports that it recently underwent upgrades, allowing close-up study of the response to the Deepwater Horizon catastrophe. The upgrades include the ability to go deeper and stay down longer, down to 6,500 m: “When we go down to 6,500m, we will have access to 98% of the ocean. That will make a huge difference - there is so much more to see down there.”


Marine biologists at Monterey Bay in California have recently been using robots that can follow organisms through the ocean. Because oceans are extremely dynamic ecosystems, having a way to track and record data continuously over long periods of time is something that hasn’t been possible until now. These robots have already been following algal blooms, which offer valuable information about the future of ocean chemistry and global warming.

Posted by Torrey Dupras & Nolan Jensen

Sources:
http://www.msnbc.msn.com/id/37913126/ns/disaster_in_the_gulf/
http://www.engadget.com/2008/03/22/smd-ultra-trencher-1-starts-its-new-job-laying-pipes-and-cables/
http://www.bbc.co.uk/news/science-environment-11938904
http://www.livescience.com/2277-submersible-robot-runs-sea-heat.html
http://www.nature.com/news/2010/101101/full/news.2010.573.html

Wednesday, April 13, 2011

Robotic Prosthetics

Prosthetics are medical devices used to replace lost or deformed limbs. They help give patients back functionality they may have had previously and can sometimes disguise the limb’s absence. Recent advances in science have begun to allow for more sophisticated robotic prosthetic limbs that restore a range of normal limb functions to amputees.

Prosthetic limbs started off very simple; examples have been found dating as early as 300 B.C. Some of the earliest were as basic as a peg used to replace a leg. Between 476 and 1000, prosthetics began to include metal hooks to replace lost hands, along with fake legs that were strictly cosmetic. These prosthetics were still very simple, but around 600 years later Ambroise Paré invented a replacement hand with movable fingers and a prosthetic leg equipped with a bendable ankle joint and a knee joint that could be locked. Artificial limbs soon became capable of being controlled by other limbs, and more recently prosthetics have grown to include more advanced materials and technologies, including hydraulics and robotics.

Modern prosthetics are making a great leap not only in functionality, but in appearance and construction as well. Compared to their predecessors of heavy rubber and plastic, today’s prosthetics are cutting edge, made with advanced plastics and even carbon fiber, making them lightweight, stronger, and more realistic.

Although prosthetics are changing and improving, the foundation remains the same. Most of today’s prosthetics consist of a pylon, a socket, and a suspension system. The pylon is an internal frame or skeleton that gives the prosthetic limb its foundation and structure. The socket is the part of the prosthetic that attaches to the stump, the residual limb at the area of amputation. The function of a prosthetic also differs according to the limb that was lost. A transfemoral amputation requires a complicated prosthetic with a hinged joint, meaning the patient needs an artificial knee as well as the lower leg and beyond. Much simpler is the transtibial amputation, below the knee, where patients can still use their own knee; with this type of amputation, only the foot and lower leg must be created. Overall, the type of amputation determines the difficulty of limb replacement, which depends on how many joints must be artificially supplemented.

Prosthetics are made on an individual basis; the phrase “one size fits all” certainly does not apply here. Prosthetists, the people who make prosthetics, must cast, mold, and create each prosthetic based on the individual’s unique needs. Modern prosthetics use much more than just rubber, plastic, and screws into the bone. They are becoming advanced enough to actually interact with the nervous system, so that an individual can control a prosthetic limb with their brain. We will go more in-depth later, but basically surgeons relocate nerves from the amputated limb so they can signal sensors on the prosthetic to perform the appropriate tasks.
 
So how do they actually work? There are a variety of prosthetics with varying degrees of functionality, from simply cosmetic and realistic-looking to nearly as functional as the original limb. These are much more than a pincer-style split hook that can open and close. Basic controllable prosthetic limbs are usually driven by motors and relays triggered by working limbs or muscles nearby. For a prosthetic leg, this is often coordinated with the opposite leg: the right leg fires to step forward as the left leg is in the extended, backward phase of walking. Advanced prosthetics use impulses sent from the brain and read by sensors on the body, actually allowing the person to use the prosthetic limb just as it was meant to be used. For a full arm amputation, surgery is required to reroute the neurons that once went to the arm, hand, and fingers to the chest, where they trigger muscle movements that in turn operate the prosthetic arm, hand, and digits. These prosthetics also work with return sensors that relay pressure and temperature changes back to the nerve endings in the muscle, giving patients a true feeling of what it would be like to still have their own limb.
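The basic muscle-driven idea can be caricatured in a few lines. The sketch below is a toy threshold controller with invented thresholds and signal scaling; real myoelectric controllers do far more filtering and per-patient calibration. It maps the average rectified level of a muscle’s electrical signal to a simple hand command.

```python
def myoelectric_command(emg_samples, close_threshold=0.6, open_threshold=0.3):
    """Toy myoelectric controller: map the average rectified EMG level
    (scaled 0..1) from a residual-limb muscle to a hand command."""
    level = sum(abs(s) for s in emg_samples) / len(emg_samples)
    if level >= close_threshold:
        return "close"   # strong contraction -> grip
    elif level <= open_threshold:
        return "open"    # relaxed muscle -> release
    return "hold"        # in between -> keep current position

print(myoelectric_command([0.8, 0.9, 0.7]))   # strong squeeze
print(myoelectric_command([0.1, 0.05, 0.2]))  # relaxed
```

The dead band between the two thresholds is the important design choice: without it, signal noise near a single threshold would make the hand chatter open and closed.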



This video does a good job explaining this concept.  


The current technology does not allow patients to control their prostheses the way they normally would, using their brain, but this technology is being developed. A scientist named Dr. John Donoghue has begun creating a system called BrainGate that is intended to allow this type of limb control. It operates by translating brain waves into a form that can be converted into computer commands and then used by robotic prosthetics. While the system is currently wired directly to the brain, Donoghue and his team hope to someday make it wireless. Dr. Hugh Herr is working on a microprocessor that, when implanted into the remaining limb muscle, can pick up brain signals and cause the limb to move. Once these technologies have advanced, people may be able to control their artificial limbs just as they would their own natural limbs!

By Peter Povolo and Rachael Raspatello

Sources:

Human Universal Load Carrier

A picture of HULC in use



Human Universal Load Carrier (HULC)

The product that is HULC today has been in development since 2005, when members of the University of California-Berkeley Robotics and Human Engineering Lab formed a company called Berkeley ExoWorks (later renamed Berkeley Bionics) and developed their original exoskeleton, designed to let humans move over long distances while carrying heavy loads. Called the ExoHiker, this original design was later improved to allow more versatile movement through rugged terrain; the improved design was called the ExoClimber. The third-generation design, HULC, was developed in 2008, and an exclusive license was sold to Lockheed Martin Corporation a year later, in 2009. The latest development has been improving the existing design to make it more rugged and field-ready, and HULC is currently going through field testing with military personnel.

The purpose of HULC is to "allow users to carry loads of up to 200lbs for extended periods of time over variable terrain". It accomplishes this by providing a frame on the user's back that connects to titanium rods along the outside of each leg. These rods are equipped with a hydraulic system to support and assist natural leg movement. The genius of the design comes from removing the load from the user's back and transferring it to the ground. The minimalist design of the exoskeleton allows a full range of movement in both the lower and upper body, including the ability to perform deep squats and crawl. The construction is primarily titanium. The current HULC model weighs in at 53 pounds without a power source, but most of that weight is borne by the frame and transferred to the ground. Other advantages of the design are that the "suit" can be removed and replaced in seconds with no assistance, is fully adjustable for users ranging from 5'4" to 6'2", folds down to a small size for transport, allows broken parts to be replaced quickly in the field, and has nearly limitless possibilities for attachments.
HULC allows a user to jog at a steady pace of 7 mph, with bursts up to 10 mph, while carrying up to its 200-pound payload. Another cool feature is that, even without power, HULC decreases the user's fatigue by supporting whatever load he or she is carrying. The brain of the HULC system, a tiny microcomputer approximately the size of a television remote control, works by sensing the user's movements and calculating, from a complex set of variable skeletal movements, what motion the user is trying to perform. The hydraulic system then supports the user's natural movement, thereby reducing fatigue. There are serious advantages to not relying on reading the electrical impulses of muscle contraction the way many other exoskeleton and bionic technologies do: it eliminates the many biological variables, such as the strength and frequency of muscle contraction, that vary from user to user. The user can also adjust HULC's settings on the fly via this microcomputer. The limitations of any device like this remain consistent: power source longevity, and durability in very rugged and demanding situations. HULC's developers are working on a battery that could run the device for 72 hours continuously. As for durability, the basic design limits potential component malfunctions and, by design, parts can be swapped quickly in the field to restore the device's capabilities. Because HULC is already in some of its final field testing, I think we will see it in use by foot soldiers in the very near future, depending on cost. Beyond that, Berkeley Bionics is already developing a new product called eLegs, using much of the technology developed for HULC, in an effort to restore paraplegics' freedom to walk.
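The "support the movement the wearer is already making" idea can be sketched very roughly in code. This is a toy proportional assist law with invented gains and limits, not Lockheed Martin's actual control algorithm: push the hydraulic actuator in the direction the joint is already moving, scaled by the carried load.

```python
def hydraulic_assist(joint_velocity_deg_s, load_kg, assist_gain=0.8,
                     max_force_n=2000.0):
    """Toy exoskeleton assist law: command a hydraulic force in the
    direction the wearer's knee is already moving, scaled by the load,
    rather than by reading muscle signals."""
    # Force grows with how fast the wearer moves and how much they carry.
    force = assist_gain * joint_velocity_deg_s * (load_kg * 9.81) / 100.0
    # Clamp to what the actuator can physically deliver.
    return max(-max_force_n, min(max_force_n, force))

# A knee swinging forward at 50 deg/s under a 90 kg load gets a modest push;
# an extreme command is clamped to the actuator limit.
print(hydraulic_assist(50.0, 90.0))
print(hydraulic_assist(1000.0, 200.0))
```

Working from joint motion rather than muscle signals is exactly the design advantage described above: the controller needs no per-user calibration of contraction strength, only the mechanical state of the frame.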
Lockheed Martin Corporation's promotional video is available here: http://www.youtube.com/watch?v=KZ_qR8zCLDc
Video of Berkeley Bionics' CEO presenting on HULC and eLegs: http://www.ted.com/talks/lang/eng/eythor_bender_demos_human_exoskeletons.html

Tuesday, April 12, 2011

Robots Making News: Dante II

Dante II is a famous robot, known best for being the first “successful” terrestrial explorer robot. Successful is in quotation marks because there were some complications and the robot was eventually lost in a volcano, though only after Dante II had already gathered the desired data. Robots began to be used to explore volcanoes after eight scientists were killed in two volcanic eruptions in 1993.
In July of 1994, Carnegie Mellon’s Field Robotics Center deployed Dante II into Mount Spurr in Alaska’s Aleutian Range. The purpose was to collect data to help researchers understand the inner workings of volcanoes. Dante II was especially unique because of a tether system that allowed the robot to enter the volcano and safely retrace its steps back out. Dante II’s main objective was to sample high-temperature fumarole gas, but another purpose was to demonstrate robotic exploration of extreme terrains like those that may be found on other planets.
Dante II making its first and only descent. Note the tether (in red) and the eight separate legs, and see how many of Dante II's seven cameras you can find.
Dante II anchored at the rim of the volcano’s crater and used its tether cable to lower itself, rappelling down into the volcano. The robot was inside the volcano for five days; to move around, Dante II used both supervised autonomous control and teleoperated control from scientists back at the lab. It was while climbing out of the crater that Dante II lost stability and fell, ending the mission. Thankfully, the computers on board had already relayed all of the information Dante II had to offer.

Dante II had eight legs, each of which could adjust vertically on its own to avoid obstacles. The robot had a single drive train that moved its separate frames with respect to each other. The frames could turn to change heading, but it was a slow process: the maximum turn per step was 7.5 degrees, so it was much easier to avoid rough terrain as soon as it was noticed. Dante II carried seven different cameras and a two-axis laser scanner attached to the main frame. In addition to the cameras, data sensors, including gas concentration and temperature sensors, were located along Dante II’s undercarriage. A satellite communications antenna allowed data and video from the cameras to be sent to the control station in real time.
Dante II was 3.7 meters long and 3.7 meters high. The robot weighed 770 kg and could carry an additional 130 kg. It could rappel down the volcano at 1 cm/s for up to 300 m, and each leg could lift a maximum of 1.3 meters vertically.
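Those specs translate into some telling back-of-the-envelope numbers. Taking the 7.5-degree and 1 cm/s figures above at face value, a 90-degree heading change took at least a dozen walking steps, and paying out the full 300 m tether at rappel speed took over eight hours:

```python
import math

MAX_TURN_PER_STEP_DEG = 7.5
RAPPEL_SPEED_M_S = 0.01   # 1 cm/s
TETHER_LENGTH_M = 300

def steps_to_turn(angle_deg):
    """Minimum number of walking steps Dante II needed to change heading."""
    return math.ceil(angle_deg / MAX_TURN_PER_STEP_DEG)

def full_descent_hours():
    """Time to pay out the whole tether at rappel speed."""
    return TETHER_LENGTH_M / RAPPEL_SPEED_M_S / 3600

print(steps_to_turn(90))      # 12 steps for a right-angle turn
print(full_descent_hours())   # over 8 hours for a full descent
```

Numbers like these make it clear why the five-day mission inside the crater was mostly spent moving, and why spotting rough terrain early mattered so much.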
Dante II was lost at the end of its mission, but it opened the door to robotic exploration of extreme locations. Scientists learned a great deal about locomotion in steep and rocky terrain, as well as in extreme temperatures. The implications of Dante II’s mission go far beyond volcanoes, and technologies first used on Dante II are now being used by NASA and other agencies. Data that was once off limits to humans is now available, and research can advance.
Sources:

Bares, JE, & Wettergreen, DS. (1999). Dante ii: technical description, results, and lessons learned. The International Journal of Robotics Research, 18(7).

http://www.ri.cmu.edu/research_project_detail.html?type=description&project_id=163&menu_id=261