Robot Zips Through Sand on C-Shaped Legs
Remember the last time you walked through sand or soft snow? It’s a lot harder than walking on solid ground! The grains making up the sand (or snow) move and flow with each step.
But sand is no problem at all for a new little robot—thanks in part to the science of terradynamics.
Engineers have been studying terramechanics, or the way vehicles with wheels and treads move over solid ground, for a long time. Daniel Goldman and his team at the Georgia Institute of Technology invented the name terradynamics to describe the study of legged creatures’ movement over grainy surfaces. Terra is Latin for “Earth” and dynamics means movement or energy. Many lizards zip easily across both sand and rock, but it’s hard to build a robot that can move over both types of ground. Goldman says, “We’re interested in exploring how limbs can enable robots to move around with the agility and mobility that some animals with limbs have.”
Goldman and his team started with a toy robot called the RoboXplorer, took off its legs, and then made new legs using a 3D printer. They tried straight legs and curved legs, and found that C-shaped legs facing forward worked best. The team also built a computer model that could predict how each new leg shape would affect the robot’s speed across sand, pebbles, glass spheres, and poppy seeds.
This research could help future engineers build better, faster robots for search-and-rescue or exploration missions. Watch the robot run on video here: http://news.sciencemag.org/sciencenow/2013/03/video-robots-on-the-beach.html
A Little Leap for the Plant Hopper
What do clocks, lawnmowers, electric toothbrushes, and a tiny insect called the plant hopper all have in common? Gears! The interlocking teeth on gears make much of human machinery possible, and it’s not surprising to find these knobbed wheels inside of almost any appliance in your home. However, no one expected to find working gears in nature. But that’s exactly what researchers have discovered.
A baby plant hopper, or nymph, looks like a cross between a flea and a cricket. It can’t fly, so it relies on jumping to get around. If one leg were to push off before or after the other, the insect would spin sideways instead of leaping forward. To keep those legs coordinated, tiny gear wheels on the hind legs lock together just before takeoff. (When the nymphs grow up, the legs stick together without the need for gears.)
This is the first time scientists have discovered natural gears that evolved to help coordinate motion. Malcolm Burrows of the University of Cambridge in England owes part of the thanks for this discovery to his 5-year-old grandson, who helped search for plant hoppers to use as research subjects. Burrows couldn’t find any in his own yard, but his grandson did!
Meet the DeltaWing: Racecar of the Future
The DeltaWing doesn’t look much like a racecar; it seems more like a rocket ship blasting off across the ground. Two tiny wheels sit close together near the front of a long, skinny nose, while the engine and driver are located far back near the larger rear wheels.
This unique design makes the car the most efficient racer ever built. Designer Ben Bowlby explained to Popular Science: “The DeltaWing goes the same speed with half the weight, half the drag, half the power, and half the fuel consumption.”
Thanks to its small body, light weight, and aerodynamic design, the car can reach 200 miles per hour with an engine that’s only a little bit more powerful than the one in a high-end family sedan. The fact that it uses less fuel and fewer tires than other racecars makes it more environmentally friendly, too.
The futuristic car participated in the 24 Hours of Le Mans race in 2012. The race is a test of endurance, and lasts—you guessed it—one full day. The DeltaWing wasn’t actually competing against the other cars—it’s so different from anything else out there that it got a special spot meant for experimental vehicles.
After six hours of racing, a minor accident with a much larger car damaged the DeltaWing’s steering and it couldn’t continue. But the DeltaWing certainly proved that it’s more than just a crazy idea. Maybe someday all racecars will look more like rocket ships!
Tour Mount Everest . . . from Your Computer!
Hang out at base camp or zoom up to the tops of mountain peaks with just a click of your mouse. Photographer David Breashears created a gigapan photo of Mount Everest: many regular photographs stitched together into a single image with over a billion pixels. You can zoom in on the image and explore different parts of the mountain without the picture getting blurry: http://www.npr.org/2012/12/20/167621313/a-billion-pixel-tour-of-mount-everest.
This photo of Everest is fun to explore, but the project is about more than just the beauty of the mountain. Breashears is comparing his images to photos of Everest from the 1950s or 1920s, and it’s easy to see that there’s less ice and snow. “We’re trying to show something that’s happening in extreme slow motion,” Breashears told NPR. “[When] you’re looking at glaciers, you can sit there and stare at them for a long time and nothing happens. They don’t talk to you; their stories are ones that you understand through science.” His non-profit organization GlacierWorks aims to document how climate change has affected the Himalayan glaciers. Take a look for yourself at http://www.glacierworks.org.
Ride the Vomit Comet!
When you watch astronauts floating around in zero gravity, do you ever wish you could try it? Soon, you might be able to.
A new roller coaster design from BRC Imagination Arts in California would send you up a steep slope at speeds over 100 miles per hour in a closed capsule. At the top, the ride would slow slightly, and you’d experience eight seconds of weightlessness. Actually building this remarkable ride would cost about $50 million.
The idea for the coaster came from NASA’s KC-135A aircraft, nicknamed the “vomit comet.” Astronauts in training take a wild ride in this plane in order to prepare for outer space.
Most roller coasters have open seats, and thrill riders with high speeds and steep drops. But this roller coaster, if it gets built, will look more like a spaceship. When you ride it, you won’t be able to see outside, meaning that you won’t realize when the car is speeding up the track or falling down the other side. But you’ll definitely notice at the top of the ride, when you float out of your seat! “The sensation is a bit like being in an enclosed room while someone fiddles with the gravity switch, turning it back and forth,” Bob Rogers of BRC told Popular Science. In order to do this, the ride will have to adjust its speeds based on the weight of the passengers. And yes, the ride will include barf bags, just in case.
Your turn! Would you ride the vomit comet? If you could take anything on board to see what happens to it in zero gravity, what would you bring?
Send your response to firstname.lastname@example.org or: VOMIT COMET, ODYSSEY, 30 Grove Street, Suite C, Peterborough, NH 03458.
The $2 Million Robot Challenge
When disaster strikes, send in the robots! After the Fukushima nuclear power plant disaster in Japan in 2011, robots helped stabilize areas of the nuclear power plant that were too dangerous for people to enter. But it took precious time to deliver the robots and then train workers at the plant on how to use them. DARPA, the Defense Advanced Research Projects Agency, has announced a Robotics Challenge with the goal of developing robots that can navigate human environments and that are easy enough for anyone to use. “Often, in a disaster, the experts who know how to handle the disaster are not robotics experts,” said Dr. Gill Pratt of DARPA.
In order to claim a share of the $2 million in prize money, the winning robots will have to perform tasks such as driving a vehicle, opening a blocked door, and climbing a ladder. Although some teams will be building or modifying their own robots, another track of competitors will be programming software for a standard robot built by the company Boston Dynamics.
The designers of the contest plan to throw out some curveballs, like interrupting the controller’s communication with the robot. “The key to successfully completing this challenge requires adaptable robots with the ability to use available human tools, from hand tools to vehicles,” Pratt said. No one knows when disaster will strike. Someday, robots may be the first responders!
Eyes in the Back of His Head
Is this a joke photograph, art, or a bizarre science experiment?
This is a picture of the back of artist Wafaa Bilal’s head. That round thing looking out at you is a camera! Every minute for an entire year, from December 2010 to 2011, the camera took a picture of whatever was happening behind Bilal’s back.
Bilal got the camera surgically attached to his head to make a point about the past and the things we leave behind without even noticing. “It is anti-photography,” Bilal says on his Web site. No hands or eyes were involved in taking the pictures, and not even the artist himself saw the images until they were shared online with the world.
During the year the project lasted, Bilal carried around a laptop linked to the camera with a USB cable. Every minute, the camera automatically took a picture, sent it to the laptop, and the laptop uploaded the picture to his Web site.
It’s not safe to attach a camera to your skull—doctors refused to do it—so Bilal went to a body-piercing shop. Two months after the procedure, the wound started to get infected and he had to wear the camera around the back of his neck while it healed. But he completed the project.
You can visit and browse through the images at www.3rdi.me. What do you think? Cool, weird, or creepy?
What’s Around the Corner?
What would you do with a camera that could see around corners? Spy on your brother or sister? Win a game of laser tag?
In the May/June 2012 issue of ODYSSEY, we reported on a camera that takes pictures so fast, you can capture light as it travels. Now, using the same technology, scientists at the Massachusetts Institute of Technology (MIT) have built a camera that takes pictures of things hidden around corners. “We have created a new form of photography, femto-photography,” says Ramesh Raskar of MIT. “Femto” refers to the femtosecond laser, which lets out extremely quick bursts of light. To peer around a corner into a room, you point the laser at a wall opposite the room. Light from the laser bounces off the wall; some of this light strikes objects in the room and bounces back to the wall, then back to the camera.
The key here is that the camera tracks both where and when the reflected light returns from the room. It does this by splitting the original beam, and letting half of the burst of light scatter into the room, while the other half gets sent back to the camera right away. The camera measures the time difference between when the two halves of the beam return.
Using this information and some complex math, a computer can build an image of what’s around a corner. Raskar and his team imagine using a camera like this to help firefighters see inside burning buildings, to help cars avoid collisions, and to help doctors navigate inside the human body.
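The timing trick can be turned into simple arithmetic. Here is a little sketch of the principle (an illustration only, not MIT’s actual software): given the measured delay between the reference half-beam and the half that scattered into the room, the speed of light tells you how much farther the scattered light traveled.

```python
# Minimal sketch of the time-of-flight idea (not MIT's real algorithm):
# turn the delay between the reference half-beam and the scattered
# half-beam into extra path length, using the speed of light.

SPEED_OF_LIGHT = 3.0e8  # meters per second

def extra_path_length(delay_seconds):
    """Distance the scattered light traveled beyond the reference beam."""
    return SPEED_OF_LIGHT * delay_seconds

# A delay of about 3.3 nanoseconds means roughly one extra meter of
# path -- which is why the camera needs femtosecond-scale precision
# to resolve the fine details of a hidden room.
delay = 3.3e-9
print(round(extra_path_length(delay), 2))  # 0.99 (meters)
```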
Catch This! See Everywhere at Once with the Ball Camera
Have you ever wished you could see everything around you, all at once? Toss this camera in the air, and 36 cameras simultaneously snap pictures, which can be stitched together to make a 360-degree panorama. “The idea came to me on a hike to the Tonga Islands,” said the camera’s inventor, Jonas Pfeil of Technische Universität Berlin. After taking many pictures of the beautiful landscape, then stitching them together on his computer to make panoramas, he realized that this tedious process could be a lot easier. The tossable, ball-shaped camera he created makes taking panorama shots way more fun, too!
Tossing a camera? Don’t worry; it won’t break. Foam padding protects the 36 mobile phone camera modules. Inside the ball, an accelerometer measures how fast the ball changes speed. This information is used to trigger the cameras when the ball is at the peak of its flight.
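The trigger relies on basic physics. This toy version is my own guess at how such a trigger could work, not Pfeil’s actual firmware: if the accelerometer tells you how fast the ball left your hand, gravity tells you when it will momentarily stop rising.

```python
# Toy model of the ball camera's trigger timing (an assumption about
# how it might work): the ball peaks when gravity has canceled out
# its launch speed, so time-to-peak = launch speed / g.

G = 9.81  # acceleration of gravity, meters per second squared

def time_to_peak(launch_speed):
    """Seconds after release until the ball momentarily stops rising."""
    return launch_speed / G

# Tossed up at about 4.9 m/s, the ball peaks half a second later --
# that's the instant all 36 cameras should fire at once.
print(round(time_to_peak(4.905), 2))  # 0.5
```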
How do you look at the picture? Good question. If you try to look at an entire 360-degree picture at once, it’s a distorted mess. Our eyes don’t work like that. Pfeil and his team also created a special viewer that lets you explore the space of the photograph. You can look around the entire scene—left, right, up, down, and even behind you. Cool! Check it out for yourself on the Web site: http://jonaspfeil.de/ballcamera.
Scream for the Camera
Forget about saying “Cheese.” If you want the Screamotron 3000 to take your picture, you have to yell as loud as you can. With some help from a local camera shop, photographer Billy Hunt took a boom box and microphone meant for karaoke and attached a Pocket Wizard, a device that can wirelessly trigger a camera to fire. The system is rigged so the Pocket Wizard will only tell the camera to shoot when the decibel level from the microphone input is high enough. In other words, you have to scream your head off or it won’t work!
“Some people scream inward and some people scream outward . . . and some people go to crazy town,” Billy Hunt told National Public Radio (NPR).
His goal with the project is to get past the self-consciousness most people feel in front of a camera, and to access deep, primal emotions. View a gallery of scream portraits here: http://www.screamotron3000.com/.
Robots vs. Humans: Who Gets to Carry the Torch?
The Olympic flame burns brightly as torchbearers pass the shining symbol of the Games from hand to hand to . . . robot?
To kick off the 2012 Summer Olympic Games, 8,000 people will carry torches in a 70-day relay that will snake its way around the entire United Kingdom to end at the Olympic Stadium in London. To choose those lucky 8,000 torchbearers, the Olympic committee asked for nominations. They certainly didn’t expect a robot!
James Law, a computer scientist at the University of Aberystwyth in Wales, nominated iCub, a robot that ODYSSEY readers first met in the October 2008 “Robo-Buddy” issue. The robot iCub looks like a human toddler, and learns about the world in much the same way as a human child does. Law says, “A robot torch bearer would be a fitting tribute to Alan Turing, and an inspiration to future generations of scientists and engineers.” Turing is often called the father of modern computer science and artificial intelligence, and 2012 marks the 100th anniversary of his birth.
The catch is, Olympic torchbearers must be at least 12 years old, and must have a registered address within the United Kingdom. The committee would have to bend the rules to allow a robot to participate.
What do you think? Should a robot be allowed to bear the Olympic torch? Send your opinion to email@example.com or: ROBOT VOTE, ODYSSEY, 30 Grove Street, Suite C, Peterborough, NH 03458.
A Paralyzed Man Walks—with a Bionic Exoskeleton!
Robert Woo lost the use of his legs after a load of steel fell on him in a construction site accident in 2007. He uses a wheelchair, and didn’t expect to ever walk again.
Then came Ekso Bionics, a company that invited Woo to test a bionic exoskeleton. The device straps over Woo’s paralyzed legs, and contains 15 sensors, a small computer, and two small motors. Right now, the bionic legs have to be controlled by a physical therapist, and Woo uses a walker for balance. However, when the final product comes out, hopefully by 2013, owners should be able to operate the controls themselves.
“I was excited but also nervous,” Woo said about walking for the first time in four years. “I didn’t know what to expect.” His kids, ages four, five, and six, definitely think it’s cool. “They watch a lot of cartoons, so they think Daddy’s a robot!”
Exoskeleton — A supporting structure on the outside of the body
From Touchscreen to No Screen: An Imaginary Smart Phone?
If you or your parents have a touchscreen smart phone that you use all the time, try an experiment. Cover your eyes with a blindfold and see if you can open your favorite app. Can you do it?
Now take off the blindfold, put your phone away, and try tapping the palm of your hand like you’re opening the same app. It may seem silly, but if you use a phone a lot, it’s likely that you can remember exactly where your favorite apps are just by pretending that the phone is actually in your hand. “Imaginary Phone,” a research project at the Hasso Plattner Institute in Germany, makes it possible to access a real smart phone across the room just by tapping your own hand.
The Imaginary Phone prototype uses a depth-sensitive camera to watch where you tap or swipe on your palm. Then, software maps those locations to your phone and a wireless radio tells the phone what to do. Sounds cool, but what’s the point of opening an app from far away? Well, if you were washing dishes or digging in the dirt outside and your phone rang, you wouldn’t have to wipe off your hands to answer—you could just tap your palm.
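The mapping step can be sketched in a few lines. In this illustration (the coordinates, grid size, and app names are my assumptions, not the Hasso Plattner Institute’s code), the camera reports where on the palm you tapped as fractions across and down, and that position is matched to the same spot on the phone’s home-screen grid of icons.

```python
# Sketch of mapping a palm tap to a home-screen icon (assumed 4x4
# grid): a tap position is given as fractions of the palm's width
# and height, then snapped to the matching grid cell.

def palm_to_app(x_frac, y_frac, columns=4, rows=4):
    """Convert a tap position (0.0-1.0 on each axis) to a grid cell."""
    col = min(int(x_frac * columns), columns - 1)
    row = min(int(y_frac * rows), rows - 1)
    return row, col

# Made-up app layout for the example.
apps = {(0, 0): "Phone", (3, 3): "Music"}
cell = palm_to_app(0.95, 0.9)  # tap near the bottom-right of the palm
print(apps.get(cell, "empty"))  # Music
```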
Unfortunately, the current Imaginary Phone device requires wearing a camera mounted on your head so you look like a miner navigating a dark cave. But eventually, smaller cameras could be developed that fit into a button on your shirt. Then you could send secret text messages without ever taking out your phone!
Planes Make It Snow
Is it raining or snowing where you are? You might have a jet airplane to thank for the crummy weather. In just the right conditions, an airplane breaking through the clouds can actually leave behind a perfectly shaped hole and a small rain- or snowstorm. “Hole-punch” and “canal” clouds, named for their odd shapes in the sky, have intrigued meteorologists for over 70 years. They knew planes could create these effects, but how remained a mystery.
Scientists observed the science in action for the first time on a research trip over Colorado in 2007. Their plane carried video cameras pointing ahead and below. “Just as we crossed underneath the hole, we could see a big band of snow directly beneath it and not on either side of it,” described Andrew Heymsfield of the National Center for Atmospheric Research (NCAR) in Colorado. But it wasn’t his team’s plane that created the hole or caused the storm, which dropped two inches of snow in a band 20 miles long and 2.5 miles wide.
Where was the snowstorm culprit? Two turboprop planes had recently flown similar paths from the Denver International Airport. Their huge propellers pushed lots of air back, cooling it down in the process. On this day, they happened to fly through a special kind of cloud containing supercooled droplets, which can stay liquid even at temperatures as cold as –35 degrees Fahrenheit. When the planes flew through, the cooling effect of the propellers was enough to freeze the droplets and send them dropping toward the ground, freezing any other droplets they happened to touch. As Heymsfield put it, “A mass of cloud particles that were then converted to ice just grew and swept out all the cloud below them.” The result? A storm and a weird-looking hole in the clouds.
Like a lot of scientific discoveries, this one happened by accident. Heymsfield recalls, “I was sitting in back of the plane. And then this data set just fell in our laps. It was a lucky break.” Because the conditions have to be just right, planes don’t cause very many storms. But if you keep an eye out for holes in the clouds near airports, you just might get to see a jet airplane make snow!
Meteorologist -- One who studies and forecasts the weather
Imagine an aircraft that morphs as it moves, changing wing shape or nose shape to adjust to conditions. Or a robot that could dissolve into an almost liquid form to flow through a tiny opening, then reconstruct on the other side. At DARPA, the Pentagon’s Defense Advanced Research Projects Agency, researchers are taking the first steps toward this sci-fi supertechnology. The paper-thin robot they came up with, called a “smart sheet,” folds itself into either a boat or paper airplane shape. “Smart sheets are Origami Robots that will [eventually] make any shape on demand for their user,” says Daniela Rus of the Massachusetts Institute of Technology (MIT).
It sounds like magic, but the trick is all science. Rigid, triangular tiles connected with elastomer (stretchable plastic) joints make up the surface of the robot. Tiny motorized switches and other electronics cover the surface, waiting to tell each part how to fold. Tiny magnets hold the pieces together, once they finish folding.
The robot follows a four-step process to create a shape. First, it analyzes a three-dimensional image of the shape it wants to be and works backwards, basically “unfolding” the shape and recording each step. Then, it produces a plan, called an algorithm, for how and when each individual tile will need to fold. Third, the individual plans are spread out over the sheet, and finally the plan is optimized to use the fewest folds possible. “A big achievement was discovering the theoretical foundations and universality of folding and fold planning,” says Rus. Rus and her colleagues have imagined all sorts of creative uses for their technology, including a futuristic Swiss army knife that forms any tool you need.
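The “work backwards” trick in step one can be sketched simply. This is a conceptual illustration only, not the MIT planner: if you record the moves that unfold a target shape into a flat sheet, playing them in reverse order is a plan for folding the flat sheet into that shape.

```python
# Conceptual sketch of "unfold, record, reverse" fold planning
# (not the actual MIT algorithm): the folding plan is simply the
# recorded unfolding steps played backwards.

def fold_plan(unfold_steps):
    """Reverse a recorded unfolding sequence to get a folding plan."""
    return list(reversed(unfold_steps))

# Hypothetical steps recorded while unfolding a paper boat flat.
unfolding = ["open hull crease", "flatten bow", "flatten stern"]
print(fold_plan(unfolding))
# ['flatten stern', 'flatten bow', 'open hull crease']
```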
Your turn! What do you think a smart sheet would be useful for? Send your ideas, complete with illustrations if you’d like, to firstname.lastname@example.org or write to: TRANSFORMERS, ODYSSEY, 30 Grove Street, Peterborough, NH 03458.
Blind Soldier Sees with His Tongue!
Special glasses plus an electrified lollipop equal good news for people living with blindness or other sight impairments. British soldier Craig Lundberg was hit and blinded by a rocket-propelled grenade while serving in Iraq. Now he gets around with the help of a guide dog named Hugo. Recently, Lundberg tried out a new technology called BrainPort, which makes it possible to sense basic shapes or even letters on the tongue!
Lundberg puts on stylish sunglasses with a camera mounted between the eyes, and sticks a plastic “lollipop” on his tongue. The camera sends images to a hand-held device that looks sort of like a remote control. The device translates black, white, and gray pixels into electrical signals that get sent to the tongue. An object gets translated as a strong sensation, while background has no stimulation at all.
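The translation step can be sketched as a simple brightness-to-tingle mapping. This is an assumed mapping for illustration, not BrainPort’s real firmware: brighter pixels in the camera image become stronger stimulation on the matching spot of the tongue grid, and dark background pixels become none at all.

```python
# Sketch of the pixel-to-stimulation idea (assumed mapping, not the
# real BrainPort device): each grayscale pixel (0 = black, 255 =
# white) becomes a stimulation strength between 0.0 and 1.0.

def pixels_to_stimulation(gray_image, max_level=255):
    """Map each grayscale pixel to a stimulation strength 0.0-1.0."""
    return [[pixel / max_level for pixel in row] for row in gray_image]

# A bright vertical bar on a dark background...
image = [
    [0, 255, 0],
    [0, 255, 0],
]
for row in pixels_to_stimulation(image):
    print(row)  # the middle "electrode" in each row gets full strength
```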
“It’s like licking a battery. . .electrical, tingly,” Lundberg told BBC News. As weird as it feels, the device works! “You get lines and shapes of things; it sees in black and white so you get a two-dimensional image on your tongue,” Lundberg explains. With the BrainPort in place, he was able to read the top line of an eye chart, and to reach out and pick up objects without fumbling around. Learning to figure out how the tongue tickling translates to sight takes some training, but Lundberg feels like it’s worth it. “I am a realist. I know this isn’t going to give me my sight back, but it could be the next best thing.” But, he plans to keep his guide dog just the same!
Can’t speak? No problem. You can type words with your brain! Just concentrate on the alphabet letter you want, and it will appear. Computer-mind communication sounds freaky, and it’s still a long way from perfect. But researchers at the Mayo Clinic in Florida have proved that brain waves can be translated into the alphabet.
This exciting news could mean great things for patients with diseases that limit movement to the point that speech or other communication is next to impossible. Examples include Lou Gehrig’s disease, spinal injuries, and “locked-in syndrome.” This scary condition means that you’re awake and aware, but only your eyes can move!
To try out this new technology, you need special surgery, called a craniotomy -- a cut that goes through your skull -- in order to get electrodes placed directly on the surface of your brain. Obviously, the researchers couldn’t just start doing brain surgery on test subjects. So they worked with two epilepsy patients who already had electrodes in place for monitoring seizures. Reading brain waves from electrodes placed directly on the brain is much more accurate than trying to read signals from sensors on the scalp, because bones and skin can interfere with the signals. But most mind-reading research has been done with sensors on the skin, not on the brain. “That’s why progress to date on developing this kind of mind interface has been slow,” explains lead researcher Jerry Shih.
To use the mind-reading machine, a patient looks at a computer screen with a six-by-six grid of letters. First, he concentrates on each letter, one at a time, while a computer records his brain waves. This step enables the computer to learn and remember how an individual’s brain waves match to the letters. Once the calibration is complete, the patient can start typing -- without touching anything! “We were able to consistently predict the desired letters for our patients at or near 100 percent accuracy,” Shih says. Now, researchers just have to work on reading what the patients really think about the hospital food. . .but that will be a bit trickier!
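Once the computer decides which row and column of the grid the patient is concentrating on, picking the letter is just a table lookup. Here is a sketch (the grid layout is my assumption; the Mayo Clinic team’s actual grid may differ):

```python
# Sketch of a six-by-six character grid for brain-wave typing (layout
# assumed for illustration). 26 letters plus the digits 0-9 fill a
# 6 x 6 grid exactly.

import string

CHARS = string.ascii_uppercase + "0123456789"
GRID = [list(CHARS[i * 6:(i + 1) * 6]) for i in range(6)]

def decode(row, col):
    """Return the character at the detected row and column."""
    return GRID[row][col]

print(decode(0, 0))  # A
print(decode(1, 2))  # I
print(decode(5, 5))  # 9
```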
Robot Arm of the Future
Luke Skywalker had a robotic arm so real you’d never know it was metal. . .unless Darth Vader tried to slice it off again. This science fiction prosthesis, or artificial limb, was the inspiration for inventor Dean Kamen’s newest technology: the Luke Arm. Kamen completed the project thanks to an assignment and funding from the Defense Advanced Research Projects Agency (DARPA).
The challenge: Create a robotic arm prosthesis, completely self-contained, that can pick up a grape without squishing it or a raisin without dropping it, and make it weigh less than nine pounds. At first, Dean Kamen thought, “They’re nuts. They’ve been watching too much Terminator.” But after a little bit of research, he started to understand how much the world of prosthetics needed to advance. A hundred years ago, he explained, we gave a wounded soldier a wooden stick with a hook on it. “Now, we give him a plastic stick with a hook on it.” Most amputees rarely wear their prosthetics. The devices simply aren’t comfortable and don’t really help with most everyday tasks.
Kamen had his work “in hand”! At the end of only one year, he had an arm that could do everything DARPA wanted. The four most important aspects of the Luke Arm are weight, modularity, motion, and controls. Lightness of weight is important so the device is comfortable and easy for people of all different sizes to carry. Modularity means that the arm can be adapted to people with different degrees of amputation. If you only need a hand, the hand comes off and works by itself. If you need everything up to the elbow, you add on a few more pieces to the hand. Motion is extremely important. Older prosthetics had only three degrees of motion, while a real arm has 22! The Luke Arm has 18; it can even reach upwards. Chuck Hildreth, who was chosen to test out the Luke Arm, lost both of his own arms when he was electrocuted as a teenager. Wearing the new arm, Chuck picked up grapes, poured himself a drink, ate cereal and milk with a spoon, and stacked paper cups. Hildreth said in a video interview with IEEE Spectrum Online, “I can’t wait to get one of these. Actually my wife can’t either. She says to me, ‘I’ve got a lot of stuff for you to do around the house.’”
How does Chuck control the arm? He uses sensors in his shoe. Pressing with his big toe moves the arm out. Other toes rotate the wrist or control the grip. Feedback from the arm is sent to a sensor on Chuck’s side -- the strength of the vibration he feels there tells him how strongly he’s gripping an object. But mechanical controls like this are only one possibility. Kamen is also working with other researchers who have demonstrated the ability to attach sensors to patients’ own nerves and tissue so they can control prosthetic devices with their brains!
Skinput: Your Skin Is the Keyboard!
You’re walking to school, when your iPod shuffles to a song you just heard. Imagine snapping your fingers to skip ahead. Or tapping your wrist to turn up the volume when your favorite tune comes on.
Brand new technology called Skinput makes all of this possible. Just like the name implies, this invention projects a keypad onto your skin so you can input information into a mobile device like an iPod or cell phone. You won’t be able to buy a Skinput device for at least a few more years, but the prototype is creating a huge buzz in the mobile computing world.
You want your cell phone or iPod small and easy to carry. But at the same time, a tiny keyboard or screen feels cramped and frustrating, especially if you’re trying to use your phone to surf the Internet or send emails. Developers have tried projecting keyboards onto tables, but there isn’t always a convenient table nearby when you need to use your phone. “What’s great about skin, unlike tables, is that it travels with us,” says inventor Chris Harrison, a Ph.D. student at Carnegie Mellon University in Pennsylvania. The solution, as Harrison and his colleagues see it, is to separate the device and the input, using the human body to communicate. “We spent a lot of nights in the lab tapping on our arms and wondering if this would ever happen,” Harrison told CNN news.
Here’s how it works: When you tap your skin or snap your fingers, the action sends a tiny wave of vibration and sound down your arm and through your bone. Try it. Tap your hand but pay attention to your forearm. Do you feel it vibrate slightly? This vibration is somewhat different depending on where you tap. When you strap Harrison’s prototype above your elbow, a tiny projector displays buttons on your skin. Sensors take a few minutes to adjust to your particular arm. Then they can tell where you tapped by listening to the vibrations, and send the information to a phone or other device. Right now, Skinput is only accurate with five buttons. It would need to have ten or more to work as a keyboard.
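The “listening” step can be sketched as a tiny pattern-matcher. This is an illustration only, with made-up numbers, not Harrison’s real signal processing: each tap location has a characteristic vibration pattern learned during calibration, and a new tap is matched to whichever known pattern it resembles most.

```python
# Toy nearest-pattern classifier for tap locations (illustrative
# sketch, not Skinput's actual code): compare a new vibration
# "fingerprint" to the ones learned during calibration.

def distance(a, b):
    """Squared difference between two feature lists."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify_tap(features, known_taps):
    """Return the button whose stored vibration pattern is closest."""
    return min(known_taps, key=lambda name: distance(features, known_taps[name]))

# Made-up "vibration fingerprints" from the calibration step.
known = {
    "wrist":   [0.9, 0.1, 0.3],
    "forearm": [0.2, 0.8, 0.5],
}
print(classify_tap([0.85, 0.15, 0.25], known))  # wrist
```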
The great thing about using the body as a keyboard is that you’re already familiar with it. A sense called proprioception lets you accurately touch your nose, knuckle, or knee without looking. Once you learned which spots on your body to tap to run your device, you’d never have to see what you were doing to do it again.
Of course, if you wanted to play Tetris on your arm by tapping to rotate the blocks, you’d probably want to look!
Watch out, Silicon Valley, there’s a new kind of memory on its way! Memristor stands for “memory resistor.” Transistors, resistors, and capacitors are all fundamental building blocks of electrical systems described in electrical engineering textbooks. Memristors, however, have always been a strange fourth cousin. Leon Chua of the University of California at Berkeley showed they were theoretically possible in 1971, but a real memristor had never been demonstrated and identified until now.
Dmitri Strukov and colleagues at HP Labs in Palo Alto, California, published their discovery in the April 2008 issue of the journal Nature. They are currently building and testing memory devices based on memristors made from a titanium dioxide sandwich. Between two metal wires are two layers of titanium dioxide. The bottom layer is pure, but the top layer is missing some of its oxygen, leaving little atomic holes in the material. When electricity flows through the top wire, these holes get pushed into the bottom layer. The side with the holes offers less resistance to electric current (more electric flow can get through it at one time). This process of pushing the holes back and forth can be repeated again and again, in effect switching the memristor between “1” and “0,” the essential building blocks of all computer memory.
Most importantly, when the power is shut off, the memristor “remembers” its state. This means that a computer using memristor memory would most likely never need to be rebooted. You could turn the machine off with all your windows open, and then turn it back on with every window in the exact same place you left it. Flash and hard disk memory are also non-volatile (they don’t forget) -- which is why your files are still there in storage when you turn the computer back on or insert your flash disk -- but these memories are slow. When working on your computer, you’re using random access memory (RAM), which is very fast, but “forgets” everything if you lose power. Memristance could provide non-volatile, speedy memory.
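The switching behavior can be captured in a toy model. This is a conceptual sketch of the idea, not HP’s device physics: current in one direction pushes the oxygen “holes” into the bottom layer and stores a 1; current the other way pushes them back for a 0, and the stored bit survives with the power off.

```python
# Toy memristor cell (conceptual sketch, not HP's real device): the
# stored bit is just where the oxygen "holes" sit, so it persists
# without power -- the non-volatility the article describes.

class ToyMemristor:
    def __init__(self):
        self.state = 0  # starts as "0"

    def apply_current(self, direction):
        """Positive current writes a 1; negative current writes a 0."""
        self.state = 1 if direction > 0 else 0

    def read(self):
        # No continuous power needed: reading just checks where
        # the holes ended up.
        return self.state

cell = ToyMemristor()
cell.apply_current(+1)
print(cell.read())  # 1 -- and it would still read 1 after a power cycle
cell.apply_current(-1)
print(cell.read())  # 0
```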
The reason memristors went undiscovered for so long is that their significance depends on the size of the device. The tinier the circuit, the more memristance matters. HP Labs' memristors are built on the nanoscale: about a thousand times tinier than the width of your hair. And as devices get even tinier, memristance only gets stronger.
Once upon a time, it took a machine the size of your refrigerator to store just five megabytes of information! Now, you can carry around several gigabytes clipped to your keychain. Memristance may eventually allow researchers to squeeze terabytes into a keychain-friendly space, ushering in a whole new frontier in computing.
Life from Space Dust
Bullet-sized particles bit through the spacecraft Stardust’s shield as it fought through the jets of comet Wild 2 (pronounced “Vilt-2”). The mission? To collect comet dust and gas and bring it back to Earth. NASA scientists hoped for evidence to support the theory that some of life’s ingredients formed in space, hitching rides on comets and meteors that pelted Earth.
Stardust survived its comet encounter, and a collection capsule packed with extraterrestrial dust returned to Earth in January 2006. Right away, it was clear that the dust contained an amino acid called glycine. “Glycine is an amino acid used by living organisms to make proteins, and this is the first time an amino acid has been found in a comet,” said Jamie Elsila of NASA’s Goddard Space Flight Center in Maryland. Amino acids have already been found in meteorites.
But before NASA could announce the discovery, they had to make sure that the space dust sample wasn’t contaminated. “It was possible that the glycine we found originated from handling or manufacture of the Stardust spacecraft itself,” said Elsila. Careful analysis revealed that the glycine contained Carbon-13, a special form of the carbon atom that is much more common in space than on Earth. “[This discovery] strengthens the argument that life in the universe may be common rather than rare,” said Carl Pilcher, director of the NASA Astrobiology Institute.
A “School” Full of Zebrafish!
What do your teachers do over summer vacation? In Rochester, Minnesota, teachers from the Lincoln K8 Choice Public School spent the summer studying zebrafish. During the school year, their students will be doing some "fishy" scientific research of their own. Dr. Stephen Ekker, a biochemist and molecular biologist at the Mayo Clinic, also in Rochester, is trying to change the way kids learn science. "Instead of trying to bring scientists in to teach a class or convincing teachers to become world-class scientists, we happened across a compromise," says Ekker. The idea is for kids to do real research in science class, and then connect what they're learning in science to the rest of the school day and their everyday lives, too.
Ekker brought together Lincoln school teachers specializing in all different subjects at the Mayo Clinic to develop zebrafish modules, specific experiments designed to collect data about the little striped fish. Lincoln students will also use the zebrafish in lessons in reading, writing, and history. The Zebrafish Core Facility, a genetics lab at the Mayo Clinic, will provide the adult and embryo fish that the students need to do their projects. These kids may even make some totally new scientific discoveries!
Why study zebrafish? “We share 75 percent of our genome with the zebrafish, and the fact that their development occurs fully visibly outside the mother allows us to learn much about genetics and development [from them],” says Ekker. The kids will research different genetic strains of the fish, and some classrooms will even get their own fluorescent microscopes. The teachers thought this specialized equipment would be way too expensive to use in the classroom, but that didn’t stop Ekker. “He’s been cobbling together microscopes for them out of spare parts,” says Elizabeth Zimmerman, spokesperson for the project.
Fluorescent microscopes -- Microscopes that shine ultraviolet light on a material that either fluoresces (glows) naturally or has been colored with fluorescent dye so that it will glow.
It’s a T-shirt! No, It’s a Camera!
Cameras are everywhere -- in cell phones, computers, high-security buildings and parking lots. But someday, instead of tossing a slim camera phone in your pocket, your pocket might be the camera, thanks to a recent breakthrough by researchers at the Massachusetts Institute of Technology (MIT) in Cambridge, Massachusetts. Engineering Professor Yoel Fink and his team managed to use a web of fibers to take a picture of a smiley face. The picture was black and white and kind of blurry, but "this work constitutes a new approach to vision and imaging," says Fink. It's the first time anybody has managed to take a picture with a mesh of fibers rather than a lens. The problem with lenses like the ones in our eyes and all modern cameras is that they can easily be damaged, causing blindness. A camera made of special fibers woven into fabric is much more durable -- if one fiber is damaged, the others can still "see."
To create their special optoelectric fibers, Fink and his team first form a tube out of layers of light-detecting materials. Then they heat the tube in a furnace and carefully draw out super thin strands -- only two or three times the width of your hair. These thin strands retain the same structure as the original tube, but are much, much smaller. The smaller the strand, the higher the resolution and the sharper the final image will be.
Inside the individual strands, two layers of semiconductors measure light intensity and wavelength. Adding a third layer could theoretically allow the strands to detect color. Weaving the fibers into fabric allows the flexible surface to “see” and send electrical signals to a computer, which focuses all of the separate strands’ information into an image. “While the current version of these fabrics can only image nearby objects, it still can see much farther than most shirts can,” says Fink. He’s got that right!
Optoelectric -- Combining visual and electric functions
Semiconductors -- Materials that allow electricity through better than insulators, but not as well as conductors
Look out, Einstein, a robot wants your job! This robot, named Adam, may be the first non-human to ever independently think up and test hypotheses in order to discover new scientific knowledge. Adam's discoveries so far all have to do with yeast genetics -- not nearly as mind-blowing as Einstein's theories of relativity, but still impressive when you realize that nobody told Adam which yeast genes to study. "Adam makes up its own mind what to do," Ross King of Aberystwyth University in the United Kingdom, the robot's creator, told CBC News in Canada. "It decides what experiments to do, what to test."
Of course, Adam’s choices are limited by the information King feeds it and the lab equipment it has access to. The physical robotics system includes lots of microplates (for growing yeast cultures), robotic arms, incubators, a freezer, liquid dispensers, fans, and other equipment useful for biological research. Adam also has loads of data on yeast and other organisms. To decide what to do, Adam finds a place where the yeast genetic data is incomplete, then searches for complete information about similar genes in other organisms. By comparing all of this data, Adam is able to form a hypothesis. . .and start experimenting. Adam can begin up to 1,000 new experiments each day!
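Adam's find-a-gap, compare, hypothesize routine can be sketched in a few lines. Everything below is invented for illustration -- the gene names, the functions, and the function names themselves are all placeholders, and the real system is far more sophisticated.

```python
# A rough sketch of Adam's hypothesis loop as the article describes it.
# All names and data here are made up for illustration.

def pick_gap(yeast_genes):
    """Find a yeast gene whose function is still unknown (None)."""
    return next(g for g, fn in yeast_genes.items() if fn is None)

def hypothesize(gene, other_organisms):
    """Borrow the function of a similar, well-studied gene elsewhere."""
    return other_organisms.get(gene, "unknown function")

yeast_genes = {"gene_A": None, "gene_B": "sugar transport"}
other_organisms = {"gene_A": "sugar breakdown"}  # data from another species

gene = pick_gap(yeast_genes)            # the gap in the yeast data
hypothesis = hypothesize(gene, other_organisms)
print(f"Hypothesis to test: {gene} handles {hypothesis}")
```

The real Adam then hands each hypothesis to its robotic arms and incubators for testing, which is how it can launch up to 1,000 experiments in a day.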
Why yeast? Biologists use this simple organism as a model for more complex ones, like human cells. So far, Adam has figured out the functions of 12 different yeast genes. When King and his team tested Adam’s results manually, everything was correct. Eventually, Adam will be able to move beyond yeast -- as long as King uploads the data necessary for new experiments. King’s team has also built a new robotic scientist named Eve. This robot will screen new drugs for diseases like malaria.
Robotics has been useful in scientific laboratories for a long time, but usually the machines just do the work and generate data that humans have to sort through. This is the first time a robot has not only designed its own experiment, but determined its own results! Still, modern Einsteins shouldn’t worry about being replaced -- robot scientists like Adam are much more likely to be lab assistants than brilliant theorists.
Your turn! What do you think a robot scientist should look like? Email your drawing to email@example.com or write to: ROBO-ART, ODYSSEY, 30 Grove Street, Suite C, Peterborough, NH 03458.
Miraculous Camera for the Blind
Elizabeth Goldring has been blind most of her adult life. She can sense only dark and light with one eye, and has very limited vision in the other. But she can surf the Internet, look at digital pictures of her family, and even take photos with a special device she helped develop as a senior fellow at Massachusetts Institute of Technology’s (MIT) Center for Advanced Visual Studies.
The story of Goldring’s “seeing machine” starts over 20 years ago, with a visit to her optometrist. He hooked her up to a device called a scanning laser ophthalmoscope (SLO) for a routine test. The machine projected an image directly onto Goldring’s retina, and she could see it!
“I asked if they could write a word, and they wrote the word ‘Sun,’” Goldring says. “It was the first word I’d seen for many months,” since her degenerative eye condition had worsened. She knew immediately that she had to find the machine’s inventor and figure out a way to share this technology with other visually impaired people.
The biggest problems in reaching that goal were price and size: a medical SLO is quite large and costs about $100,000. The SLO’s inventor, Rob Webb of the Schepens Eye Research Institute at Harvard University in Boston, Massachusetts, collaborated with Goldring and a team of MIT students to develop the current prototype. It can be made for under $500 and carried around in one hand. The “seeing machine” can be hooked up to any technology with a visual feed, such as a computer, video camera, or digital camera. The visual information travels to a liquid crystal display (LCD) screen within the seeing machine. Then, light-emitting diodes (LEDs) project the image onto a special lens that focuses it into a tiny spot of light on the retina.
The seeing machine won’t work for every visually impaired person -- the retina has to be functional in order for the images to be processed by the brain -- but for those with impairments like Goldring’s, who may be forgetting what it’s like to see a loved one’s face, such a machine would truly be a miracle. Goldring says, “I can’t believe that this eye that sees nothing can look into this machine and clearly see an image.”
. . .And Speedy, Hot Pink Submarines
“Welcome home, seafaring robot!” Scientists in Australia celebrated the successful first voyage of SG-154, a remote-controlled submarine that can dive down as deep as half a mile to measure and transmit data on currents deep below the ocean’s surface. The first mission, though, was less about measuring and more about remote-controlled diving practice. Instead of using a motor to dive, the submarine moves in a vertical zig-zag like a porpoise. Wings help keep the gliding submarine on course, and an oil-filled chamber inflates to handle pressure changes. “[SG-154] doesn’t have any propulsion to help it move forward or backwards -- it just glides. So if the currents are too strong it can be a real problem,” says Ken Ridgway, senior researcher for Commonwealth Scientific and Industrial Research Organization (CSIRO) in Australia. The pink porpoise robot’s next mission: to measure ocean currents and conditions. The real-time ocean data that SG-154 measures can be used for everything from planning shipping routes to forecasting the weather.
Half Bike, Half Laundromat!
Next time you ride your bike down the street, think about all the energy building up as you spin those pedals. What else could your bike’s energy be used for? I bet your first thought isn’t to do laundry. But wait until you see the bike-pedal-powered washing machine a team of Massachusetts Institute of Technology (MIT) students and staff designed and built.
In places where people have to carry water by hand to wash clothes in buckets or in a river or stream, the simple chore of doing laundry can take eight hours per load. And the process adds to water pollution. Fancy, high-powered washing machines don’t help if you have no money to buy them and have no place to plug them in. The MIT team’s challenge was to make a washing machine from spare parts that could run without electricity. The result? The “bicilavadora,” a name combining the Spanish words for “bicycle” and “washing machine.”
The team took their prototype to an orphanage in Ventanilla, Peru. With the orphanage’s 670 kids, there were plenty of clothes in need of washing! The test, however, had a problem: Some water leaked out around the edges, which could cause the outer metal barrel to rust. But the team is confident that they can make a more robust machine with only a few changes.
The outer barrel of the bicilavadora is made from pieces of an old metal oil barrel. The clothes go into an inner drum made from special plastic panels designed by graduate student Radu Raduta. A gear on the outside of the drums connects to a bike chain and frame. “It uses a standard mountain bike gear range,” explains Gwyndaf Jones, the instructor who led the trip to Peru. “The highest gear is the spin cycle, and the lowest gear is the wash cycle.” All you have to do is fill up the inner barrel with soap and water, then close the machine and start pedaling! Holes in the plastic inner barrel allow soap and water to flow in and out during washing and rinsing. After the water is all drained out, the wet clothes whip around like lettuce in a salad spinner. The almost-dry clothes are then hung on a clothesline to dry completely. The washing process is quick (about an hour) and allows plenty of time for more important work, like keeping 670 kids at an orphanage entertained.
Think about it! What other cool things could you make using the parts from a bicycle? Draw a picture of your invention and explain what it can do. Email your response to firstname.lastname@example.org or write to: BIKE INVENTION, ODYSSEY, 30 Grove Street, Suite C, Peterborough, NH 03458.
Robust-- Strong or long-lasting
Uncle Sam Wants You
Hang out in the Alienware computer gaming area and play high-speed Internet military games like Tom Clancy’s Ghost Recon: Advanced Warfighter 2 or Call of Duty. Better yet, go to the simulator room and join one of three missions that bring to life authentic battle scenarios. In fact, sit in a real Humvee and “fire” on the enemy projected on a 15-foot-high battleground scene complete with surround-sound effects. Yes sir, Uncle Sam Wants You! To get you, the U.S. Army has invested $12 million in a state-of-the-art, one-of-a-kind army recruitment facility. Located next to a Banana Republic store in the busy Franklin Mills Mall in Philadelphia, the U.S. Army Experience Center (AEC) is a two-year experiment. It’s a playground atmosphere that is all military. In addition to the gaming area, you can find a Tactical Ops Center, a Career Navigator, a lounge, and even a café. You can reserve an area of the 15,000-square-foot Center for clubs’ or educators’ meetings. And there’s no hard-core sales pitch -- kids 13 and up can play. However, just in case you’re thinking about joining the U.S. Army, you’ll find recruitment officers in polo shirts on duty to answer your questions.
Even with this low-pressure approach, the Center has its critics. Some soldiers wonder if the use of video games glamorizes war and presents an unrealistic view of what it feels like when people get killed. John Grant, an Army veteran and member of the Philadelphia Chapter of Veterans for Peace says, “They’re using $12 million of taxpayer money to sell militarism to kids using video games, to brand the military in a positive way. This is an unfair recruiting method. Where is there in the Center something that shows veterans wracked with post-traumatic stress syndrome? Video games cannot simulate real combat.” But Pete Geren, Secretary of the Army, disagrees. In a press release announcing the Center’s opening, he said, “Potential recruits are afforded a unique opportunity through the Army Experience Center to learn what it means to be the best-led, best-trained, and best-equipped Army in the world by allowing them to virtually experience multiple aspects of the Army.”
Who Needs Blood and Gore?
Think of your favorite video game. What makes it fun to play? If you’re anything like most video game players, it’s not the blood (if there even is any in your favorite game!). Instead, your answer might be: “I like when I manage to beat a hard boss” or: “I love leveling up my character and choosing new skills.” That’s right, feeling in control or victorious, having lots of choices, and choosing the best strategies are much more important to most gamers than the amount of blood and guts, according to a series of two surveys and four studies by psychologist Richard Ryan of the University of Rochester in New York.
In three small experimental studies, Ryan and his team programmed different levels of violence into popular games. In a group of 36 male and 65 female college students, half played the original, violent version of Half-Life 2 and the other half destroyed their enemies in a much happier way. “Instead of exploding in blood and dismemberment, they floated gently into the air and went back to base,” Ryan told Science News. A different group of 39 male gamers (mostly around 19 years old) played The House of the Dead III set to either high violence (spouting blood) or low violence (green goo). The result? It’s not the goo or the blood that’s thrilling; it’s the feeling of victory.
In his research, Ryan didn’t forget to check for people who tend to be more hostile and angry in day-to-day life. They must like more blood, right? Wrong! Subjects who got high scores on psychological tests of aggression tended to prefer games advertised as violent, but when they actually played less violent versions of those games, they reported having just as much fun.
Although none of the studies involved kids, Ryan thinks this is good news for video game makers, parents, and players. Games don’t have to be bloody to be lots of fun!
Your turn! What’s your favorite video game and what makes it fun? Email your answer to email@example.com or write to: JUST FUN, ODYSSEY, 30 Grove Street, Suite C, Peterborough, NH 03458.
It sure is fun to be a wizard, superhero, or mad scientist in a virtual world like Teen Second Life or World of Warcraft. But no matter how hard you try to create an avatar who’s nothing like you, the way you play with that avatar has a lot to do with how you play in real life. If you’re a girl, you’ll most likely want to socialize with other characters. If you’re a boy, you’ll probably look for more fast-paced action games.
It’s pretty obvious that boys and girls tend to play differently, but it might be a surprise to learn that this is true across cultures, and even in virtual worlds where you’re trying to pretend to be someone else! Psychologists at Georgetown University in Washington, D.C. let 126 fifth graders loose in a MUD. No, that doesn’t mean squishy dirt; it stands for Multi-User Domain, a fancy term for a virtual world.
In this particular MUD, the kids got to pick a name, sex, and costume for their avatars. They could be a normal kid in a T-shirt and jeans, a punk kid in a leather jacket, a soccer player, a firefighter, or a wizard. Using a computer mouse, the kids could then switch between background scenes, make their characters move, change facial expressions, and talk in speech bubbles.
The researchers were interested in something called gender-bending, when a girl chooses a boy avatar or vice versa. Only 13 percent of the fifth graders were gender-benders, and they were more likely to have fun with opposite-gendered avatars if they were sitting and playing together in a room with a friend of the same sex. When girls and boys who knew each other tried to play together, they often had trouble agreeing on what kind of game to play. The girls wanted conversation games; the boys wanted action games. This isn’t a bad thing, though. “MUDs can provide a virtual play space for preadolescent children to discover who they are,” Sandra Calvert, one of the study’s authors, told Science News. I wonder if any kids discovered that they wanted to grow up to be real wizards?
Do you spend hours playing in virtual worlds? What kinds of games do you play, and what was the craziest avatar you ever created? Email your name and a description of your avatar to firstname.lastname@example.org or write to: MY AVATAR, ODYSSEY, 30 Grove Street, Suite C, Peterborough, NH 03458.
Are Wii Having Fun Yet?
Want to help science by playing your Wii? Head on over to Rice University in Texas where professors Marcia O’Malley and Michael Byrne are testing the Wiimote’s motion capture abilities to learn about, well, learning. They plan to record people as they play games using different motor skills, such as swinging the Wiimote as a virtual tennis racquet to hit a virtual ball, and analyze what happens as subjects get better at the game. Eventually, the data O’Malley and Byrne collect may be useful for creating something like a robotic sleeve that helps you improve your tennis game by gently guiding you to fix your swing. The pair has earned a National Science Foundation (NSF) grant to fund their research for the next three years.
This project follows up on O’Malley’s previous work developing a computer system using a joystick to help stroke victims recover simple motor skills. When the user makes a wrong move, the joystick resists the motion, guiding the hand along the right path.
O’Malley and Byrne are now interested in more complex motor skills and in three different types of learners: “experts” who learn a new motor skill at a steady pace until they figure it out; “novices” who learn at the same pace but may never figure it out; and others, who O’Malley says, “start off awful, but somewhere in the middle of training . . .suddenly ‘get it.’” It’s Byrne’s job, as a specialist in computer-human interaction, to figure out when, where, and how that “I get it!” moment happens. He’ll do that by analyzing computer data on the range of motion used in performing a motor skill. The experimenters hope to then use their results to help people learn the skill faster, with less trial and error. “Using the Wii will be a great way to recruit subjects,” says O’Malley. “We can say, ‘Hey, kids, come play some games!’”
Teddy BEAR to the Rescue!
It may have a head like a teddy bear, but the Battlefield Extraction Assist Robot (BEAR) is much more than a friendly face. It can carefully pick up a wounded soldier, then squeeze through doorways, climb stairs, and zoom across smooth surfaces on wheels to carry the soldier to safety.
The robot’s strange-looking lower body helps it switch between different kinds of motion. It can stand up on its “tiptoes” at its full six-foot height to walk over rough terrain, or it can fold its legs down into a tread, like that of a tank, to travel quickly. When it needs to pick something or someone up, it gets down on its belly and slides its arms underneath like a forklift. It can lift about 500 pounds in one fluid motion, thanks to its hydraulic system.
Vecna Technologies developed the first BEAR prototype in Cambridge, Massachusetts, in 2007. “We saw a need for a robot that could essentially go where a human can,” says Daniel Theobald, Vecna’s president. According to Theobald, the BEAR can’t think like a human, yet. The current prototype is like a giant remote-control robot. It has cameras and microphones so its controller sees what it sees and hears what it hears in order to lead it across a battlefield. Eventually, Vecna plans to build a BEAR that is autonomous.
Does this all sound more than vaguely familiar? That’s because fiction writer Angie Smibert based her story “The BEARS of Syria Planum,” which appeared in our November 2008 issue, “Robo-Buddy,” on Vecna’s real-life ’bot. She even called it “Theo.”
As we learned in that issue, robots that can go where people go and lift things gently are useful for much more than war zones. Meet TransferBot and HomeBEAR. These Vecna robots haven’t been built yet, but they won’t be too different from their battlefield cousin. TransferBot is designed to help move hospital patients who can’t move themselves. HomeBEAR will be a friendly robot-helper for elderly or disabled people who need an extra hand (or two) to get by during the day.
Uh oh, the air’s full of carbon dioxide! Better grab a scrubber and get to work. If you don’t think you can clean the air like you clean a toilet, think again. The carbon scrubber, built by David Keith of the University of Calgary in Canada and his team, is basically a twenty-foot-tall plastic tower on wheels that takes in normal air on one end, sends it through filters soaked with caustic soda -- a chemical that absorbs CO2 -- and spits out clean air on the other end. Keith’s team is still testing where to store all their captured carbon. One idea is to inject it into rocks on the ocean floor, but scientists still aren’t sure what effect that could have on ocean ecosystems.
Scrubbing up carbon is not a new idea. Some carbon producing sources, like power plants, use carbon capture and storage (CCS) technology to soak up extra carbon right where it’s created. Air capture is trickier business. “At first thought, capturing CO2 from the air where it’s at a concentration of 0.04 percent seems absurd,” Keith notes. (Near factories, the concentration is closer to 10 percent.) That’s because if the air scrubber uses too much electricity, it will put just as much carbon back into the air as it takes out!
Eventually, Keith hopes to power his tower with solar panels, which means it won’t produce any CO2 at all. The prototype uses electricity, but only a small amount. In fact, Keith says, for every kilowatt-hour of electricity used to run the machine, the carbon it captures is ten times as much as the carbon emitted to make the electricity. However, Keith’s tower can only capture 20 metric tons of CO2 per year on a single square meter of scrubbing material, which is only as much as one average American produces in that same time period. That’s a pretty good amount of scrubbing, but even used on a massive scale it’s still not enough to turn global warming around.
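You can check the scale problem yourself with the article's own numbers. This quick back-of-the-envelope sketch assumes, from the figures above, that one square meter of scrubber captures about as much CO2 per year as one average American emits (roughly 20 metric tons); the variable names are just for this example.

```python
# Back-of-the-envelope: how much scrubber material would it take
# to offset a million Americans? (Numbers from the article; the
# one-American = one-square-meter match is stated there.)

capture_per_m2 = 20      # metric tons of CO2 captured per year, per m^2
per_american = 20        # rough metric tons of CO2 one American emits yearly
people = 1_000_000

area_needed = people * per_american / capture_per_m2
print(f"{area_needed:,.0f} square meters of scrubbing material")
```

A million square meters is about a square kilometer of scrubbing material for every million people, which shows why one tower alone can only be "one small start."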
There’s a big prize out there for a system that could remove one billion or more metric tons of CO2 per year from the atmosphere for ten years! The $25 million prize was offered in 2007 by British industrialist Richard Branson and former U.S. Vice President Al Gore.
One carbon-scrubbing tower won’t have much impact on global warming, but it’s one small start! Discovery Channel profiled Keith’s creation on the show Project Earth. You can explore the different parts of the scrubber here:
Your Very Own Jetpack
What’s the coolest way to zip around a city? Driving a car? Riding a motorcycle? Zooming on a skateboard? What about flying with a jetpack?
Jetpacks aren’t just for comic books, video games, and movies any more. In July 2008, New Zealand inventor Glenn Martin proved that people can learn to fly. . .with a little help from some gas-turbine-powered fans. Martin has been working on his flying machine for 27 years, and finally the Martin Jetpack is ready to go on sale in 2009.
This isn’t the first jetpack ever; the U.S. military built one, called the “Bell Rocket Belt,” in the 1950s, but it could only fly for 26 seconds before running out of fuel! That’s perfect for Hollywood stunts, but not very practical. Martin’s pack can fly for thirty miles in half an hour on a full five-gallon tank of regular gasoline. He still hasn’t taken it higher than a few feet off the ground in demonstrations, but he’s planning higher altitude tests. The controls are two simple joystick-like handles. One controls pitch (up/down) and roll (tilting from side to side) and the other yaw (left/right) and throttle (speed).
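Those numbers are easy to turn into speed and gas mileage. A quick sketch, using only the figures from the article (thirty miles, half an hour, five gallons):

```python
# Working out the Martin Jetpack's advertised performance.
distance_miles = 30
time_hours = 0.5
tank_gallons = 5

speed = distance_miles / time_hours      # cruising speed in mph
mileage = distance_miles / tank_gallons  # miles per gallon

print(speed, mileage)  # -> 60.0 6.0
```

Sixty miles per hour sounds great; six miles per gallon, less so -- most cars go five times farther on the same gas.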
It’s still not the safest or most efficient way to get around, but it sure looks like fun. That’s if you don’t mind the noise or having a 250-pound thing the size of a piano strapped to your back. But you’ll have to wait until you’re older to hop in and try a flight. Wannabe pilots have to weigh between 140 and 240 pounds and pass a training course. Then, of course, they have to pull together $100,000. Start saving your pennies!