

Science Scoops


Robot Transformers

Imagine an aircraft that morphs as it moves, changing wing shape or nose shape to adjust to conditions. Or a robot that could dissolve into an almost liquid form to flow through a tiny opening, then reconstruct on the other side. At DARPA, the Pentagon’s Defense Advanced Research Projects Agency, researchers are taking the first steps toward this sci-fi supertechnology. The paper-thin robot they came up with, called a “smart sheet,” folds itself into either a boat or paper airplane shape. “Smart sheets are Origami Robots that will [eventually] make any shape on demand for their user,” says Daniela Rus of the Massachusetts Institute of Technology (MIT).

It sounds like magic, but the trick is all science. Rigid, triangular tiles connected with elastomer (stretchable plastic) joints make up the surface of the robot. Tiny motorized switches and other electronics cover the surface, waiting to tell each part how to fold. Tiny magnets hold the pieces together, once they finish folding.

The robot follows a four-step process to create a shape. First, it analyzes a three-dimensional image of the shape it wants to be and works backwards, basically “unfolding” the shape and recording each step. Then, it produces a plan, called an algorithm, for how and when each individual tile will need to fold. Third, the individual plans are spread out over the sheet, and finally the plan is optimized to use the fewest folds possible. “A big achievement was discovering the theoretical foundations and universality of folding and fold planning,” says Rus. Rus and her colleagues have imagined all sorts of creative uses for their technology, including a futuristic Swiss army knife that forms any tool you need.
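If you like to code, you can play with the four-step idea yourself. Here is a toy Python sketch, not the MIT team's real planner: the tile and crease names are made up, and each "shape" is just a list of folds instead of a 3-D model.

```python
# A toy version of the smart sheet's four-step planning process.
# Tile names and creases are invented for illustration only.

def plan_folds(folded_shape):
    # Step 1: "unfold" the target shape, recording each step in reverse.
    unfolding_steps = list(reversed(folded_shape))

    # Step 2: turn that recording into a folding algorithm
    # (fold in the opposite order of the unfolding).
    algorithm = list(reversed(unfolding_steps))

    # Step 3: spread the plan over the sheet by assigning
    # each fold to the tile that performs it.
    per_tile = {}
    for tile, crease in algorithm:
        per_tile.setdefault(tile, []).append(crease)

    # Step 4: optimize by dropping repeated folds on the same crease.
    for tile in per_tile:
        seen, optimized = set(), []
        for crease in per_tile[tile]:
            if crease not in seen:
                seen.add(crease)
                optimized.append(crease)
        per_tile[tile] = optimized
    return per_tile

boat = [("tile_1", "valley"), ("tile_2", "mountain"), ("tile_1", "valley")]
print(plan_folds(boat))  # {'tile_1': ['valley'], 'tile_2': ['mountain']}
```

Real fold planning has to worry about geometry and collisions, but the shape of the algorithm — unfold, plan, distribute, optimize — is the same.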

Your turn! What do you think a smart sheet would be useful for? Send your ideas, complete with illustrations if you’d like, to [email protected] or write to: TRANSFORMERS, ODYSSEY, 30 Grove Street, Peterborough, NH 03458.

Blind Soldier Sees with His Tongue!

Special glasses plus an electrified lollipop equal good news for people living with blindness or other sight impairments. British soldier Craig Lundberg was hit and blinded by a rocket-propelled grenade while serving in Iraq. Now he gets around with the help of a guide dog named Hugo. Recently, Lundberg tried out a new technology called BrainPort, which makes it possible to sense basic shapes or even letters on the tongue!

Lundberg puts on stylish sunglasses with a camera mounted between the eyes, and sticks a plastic “lollipop” on his tongue. The camera sends images to a hand-held device that looks sort of like a remote control. The device translates black, white, and gray pixels into electrical signals that get sent to the tongue. An object gets translated as a strong sensation, while background has no stimulation at all.

“It’s like licking a battery. . .electrical, tingly,” Lundberg told BBC news. As weird as it feels, the device works! “You get lines and shapes of things; it sees in black and white so you get a two-dimensional image on your tongue,” Lundberg explains. With the BrainPort in place, he was able to read the top line of an eye chart, and to reach out and pick up objects without fumbling around. Learning to figure out how the tongue tickling translates to sight takes some training, but Lundberg feels like it’s worth it. “I am a realist. I know this isn’t going to give me my sight back, but it could be the next best thing.” But, he plans to keep his guide dog just the same!

Mind-Reading Machine

Can’t speak? No problem. You can type words with your brain! Just concentrate on the alphabet letter you want, and it will appear. Computer-mind communication sounds freaky, and it’s still a long way from perfect. But researchers at the Mayo Clinic in Florida have proved that brain waves can be translated into the alphabet.

This exciting news could mean great things for patients with diseases that limit movement to the point that speech or other communication is next to impossible. Examples include Lou Gehrig’s disease, spinal injuries, and “locked-in syndrome.” This scary condition means that you’re awake and aware, but only your eyes can move!

To try out this new technology, you need special surgery, called a craniotomy — a cut that goes through your skull — in order to get electrodes placed directly on the surface of your brain. Obviously, the researchers couldn’t just start doing brain surgery on test subjects. So they worked with two epilepsy patients who already had electrodes in place for monitoring seizures. Reading brain waves from electrodes placed directly on the brain is much more accurate than trying to read signals from sensors on the scalp, because bones and skin can interfere with the signals. But most mind-reading research has been done with sensors on the skin, not on the brain. “That’s why progress to date on developing this kind of mind interface has been slow,” explains lead researcher Jerry Shih.

To use the mind-reading machine, a patient looks at a computer screen with a six-by-six grid of letters. First, he concentrates on each letter, one at a time, while a computer records his brain waves. This step enables the computer to learn and remember how an individual’s brain waves match to the letters. Once the calibration is complete, the patient can start typing, without touching anything! “We were able to consistently predict the desired letters for our patients at or near 100 percent accuracy,” Shih says. Now, researchers just have to work on reading what the patients really think about the hospital food. . .but that will be a bit trickier!
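The calibrate-then-type idea can be sketched in a few lines of Python. This is a toy illustration, not the Mayo Clinic's actual algorithm: each "brain-wave reading" here is just a single made-up number, and typing means matching a new reading to the closest stored signature.

```python
# Toy calibrate-and-predict sketch (invented numbers, not real EEG data).

def calibrate(recordings):
    # recordings: {letter: list of numeric brain-wave readings}
    # Store the average reading as that letter's "signature."
    return {letter: sum(vals) / len(vals) for letter, vals in recordings.items()}

def predict(signatures, reading):
    # Type the letter whose signature is closest to the new reading.
    return min(signatures, key=lambda letter: abs(signatures[letter] - reading))

signatures = calibrate({"A": [1.0, 1.2], "B": [3.0, 2.8], "C": [5.1, 4.9]})
print(predict(signatures, 2.9))  # B
```

The real system works with far richer signals, but the two phases are the same: learn each person's patterns first, then match new brain waves against what was learned.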

Robot Arm of the Future

Luke Skywalker had a robotic arm so real you’d never know it was metal. . .unless Darth Vader tried to slice it off again. This science fiction prosthesis, or artificial limb, was the inspiration for inventor Dean Kamen’s newest technology: the Luke Arm. Kamen completed the project thanks to an assignment and funding from the Defense Advanced Research Projects Agency (DARPA).

The challenge: Create a robotic arm prosthesis, completely self-contained, that can pick up a grape without squishing it or a raisin without dropping it, and make it weigh less than nine pounds. At first, Dean Kamen thought, “They’re nuts. They’ve been watching too much Terminator.” But after a little bit of research, he started to understand how much the world of prosthetics needed to advance. A hundred years ago, he explained, we gave a wounded soldier a wooden stick with a hook on it. “Now, we give him a plastic stick with a hook on it.” Most amputees rarely wear their prosthetics. The devices simply aren’t comfortable and don’t really help with most everyday tasks.

Kamen had his work “in hand”! At the end of only one year, he had an arm that could do everything DARPA wanted. The four most important aspects of the Luke Arm are weight, modularity, motion, and controls. Low weight is important so the device is comfortable and easy for people of all different sizes to wear. Modularity means that the arm can be adapted to people with different degrees of amputation. If you only need a hand, the hand comes off and works by itself. If you need everything up to the elbow, you add on a few more pieces to the hand. Motion is extremely important. Older prosthetics had only three degrees of motion, while a real arm has 22! The Luke Arm has 18; it can even reach upwards.

Chuck Hildreth, who was chosen to test out the Luke Arm, lost both of his own arms in an electrical accident as a teenager. Wearing the new arm, Chuck picked up grapes, poured himself a drink, ate cereal and milk with a spoon, and stacked paper cups. Hildreth said in a video interview with IEEE Spectrum Online, “I can’t wait to get one of these. Actually my wife can’t either. She says to me, ‘I’ve got a lot of stuff for you to do around the house.’”

How does Chuck control the arm? He uses sensors in his shoe. Pressing with his big toe moves the arm out. Other toes rotate the wrist or control the grip. Feedback from the arm is sent to a sensor on Chuck’s side — the strength of the vibration he feels there tells him how strongly he’s gripping an object. But mechanical controls like this are only one possibility. Kamen is also working with other researchers who have demonstrated the ability to attach sensors to patients’ own nerves and tissue so they can control prosthetic devices with their brains!
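Here is a hypothetical Python sketch of that control scheme. The sensor names and commands are made up for illustration; the real shoe sensors are more sophisticated than a lookup table.

```python
# Toy model of shoe-sensor controls and grip feedback (all names invented).

TOE_COMMANDS = {
    "big_toe": "extend_arm",
    "second_toe": "rotate_wrist",
    "third_toe": "close_grip",
}

def handle_press(toe):
    # Look up which arm movement a toe press triggers.
    return TOE_COMMANDS.get(toe, "no_action")

def feedback_vibration(grip_force, max_force=10.0):
    # Stronger grip -> stronger vibration on the wearer's side,
    # reported as a level from 0.0 (none) to 1.0 (maximum).
    return min(grip_force / max_force, 1.0)

print(handle_press("big_toe"))  # extend_arm
print(feedback_vibration(5.0))  # 0.5
```

The key design idea survives the simplification: input and feedback both travel through body parts the user can already feel and move.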

Skinput: Your Skin Is the Keyboard!

You’re walking to school, when your iPod shuffles to a song you just heard. Imagine snapping your fingers to skip ahead. Or tapping your wrist to turn up the volume when your favorite tune comes on.

Brand new technology called Skinput makes all of this possible. Just like the name implies, this invention projects a keypad onto your skin so you can input information into a mobile device like an iPod or cell phone. You won’t be able to buy a Skinput device for at least a few more years, but the prototype is creating a huge buzz in the mobile computing world.

You want your cell phone or iPod small and easy to carry. But at the same time, a tiny keyboard or screen feels cramped and frustrating, especially if you’re trying to use your phone to surf the Internet or send emails. Developers have tried projecting keyboards onto tables, but there isn’t always a convenient table nearby when you need to use your phone. “What’s great about skin, unlike tables, is that it travels with us,” says inventor Chris Harrison, a Ph.D. student at Carnegie Mellon University in Pennsylvania. The solution, as Harrison and his colleagues see it, is to separate the device and the input, using the human body to communicate. “We spent a lot of nights in the lab tapping on our arms and wondering if this would ever happen,” Harrison told CNN news.

Here’s how it works: When you tap your skin or snap your fingers, the action sends a tiny wave of vibration and sound down your arm and through your bone. Try it. Tap your hand but pay attention to your forearm. Do you feel it vibrate slightly? This vibration is somewhat different depending where you tap. When you strap Harrison’s prototype above your elbow, a tiny projector displays buttons on your skin. Sensors take a few minutes to adjust to your particular arm. Then they can tell where you tapped by listening to the vibrations, and send the information to a phone or other device. Right now, Skinput is only accurate with five buttons. It would need to have ten or more to work as a keyboard.
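You can sketch the tap-locating trick in Python, too. This toy version uses invented numbers, not real sensor data: pretend each skin spot rings at its own characteristic frequency, and the armband picks the closest match.

```python
# Toy Skinput-style tap locator (frequencies are made up for illustration).

BUTTON_FREQUENCIES = {
    "wrist": 25.0,
    "forearm": 40.0,
    "palm": 60.0,
    "thumb": 75.0,
    "pinky": 90.0,  # five buttons, matching the prototype's current limit
}

def locate_tap(measured_frequency):
    # Return the button whose stored frequency is closest to the tap.
    return min(BUTTON_FREQUENCIES,
               key=lambda b: abs(BUTTON_FREQUENCIES[b] - measured_frequency))

print(locate_tap(42.5))  # forearm
```

The real prototype listens to whole vibration patterns rather than single numbers, which is why its calibration step takes a few minutes per arm.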

The great thing about using the body as a keyboard is that you’re already familiar with it. A sense called proprioception lets you accurately touch your nose, knuckle, or knee without looking. Once you learned which spots on your body to tap to run your device, you’d never have to see what you were doing to do it again.

Of course, if you wanted to play Tetris on your arm by tapping to rotate the blocks, you’d probably want to look!

Memristor Discovery

Watch out, Silicon Valley, there’s a new kind of memory on its way! Memristor stands for “memory resistor.” Transistors, resistors, and capacitors are all fundamental building blocks of electrical systems described in electrical engineering textbooks. Memristors, however, have always been a strange fourth cousin. Leon Chua of the University of California at Berkeley showed they were theoretically possible in 1971, but a true memristor had never been both demonstrated and identified in reality, until now.

Dmitri Strukov and colleagues at HP Labs in Palo Alto, California published their discovery in the April 2008 issue of the journal Nature. They are currently building and testing memory devices based on memristors made from a titanium dioxide sandwich. Between two metal wires are two layers of titanium dioxide. The bottom layer is pure, but the top layer is missing some of its oxygen, leaving little atomic holes in the material. When electricity flows through the top wire, these holes get pushed into the bottom layer. The side with the holes offers less resistance to electric current (more electric flow can get through it at one time). This process of pushing the holes back and forth can be repeated again and again, in effect switching the memristor between “1” and “0,” the essential building blocks of all computer memory.
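For coders, the switching trick can be imagined as a one-bit memory cell. This Python sketch is only an illustration of the idea, not real device physics:

```python
# Toy model of a memristor as a 1-bit memory cell. Driving current one
# way pushes the oxygen "holes" into the bottom layer (low resistance,
# a "1"); reversing the current pushes them back (high resistance, "0").
# The state survives "power off" because nothing resets it.

class ToyMemristor:
    def __init__(self):
        self.low_resistance = False  # starts as "0"

    def write(self, bit):
        # Simulate pushing the holes one way or the other.
        self.low_resistance = (bit == 1)

    def read(self):
        # Measure the resistance state without changing it.
        return 1 if self.low_resistance else 0

cell = ToyMemristor()
cell.write(1)
# "Power off" here just means we stop doing anything; the state persists.
print(cell.read())  # 1
```

Notice that nothing in the model needs power to hold its value between `write` and `read` — that is the property the next paragraph makes so much of.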

Most importantly, when the power is shut off, the memristor “remembers” its state. This means that a computer using memristor memory would most likely never need to be rebooted. You could turn the machine off with all your windows open, and then turn it back on with every window in the exact same place you left it. Flash and hard disk memory are also non-volatile (they don’t forget) — which is why your files are still there in storage when you turn the computer back on or insert your flash drive — but these memories are slow. When working on your computer, you’re using random access memory (RAM), which is very fast, but “forgets” everything if you lose power. Memristance could provide non-volatile, speedy memory.

The reason memristors went undiscovered for so long is simply that their significance is inversely proportional to the size of a device. The tinier the circuit, the more memristance matters. HP Lab’s memristors are built on the nanoscale: about a thousand times tinier than the width of your hair. And as devices get even tinier, memristance only gets stronger.

Once upon a time, it took a machine the size of your refrigerator to store just five megabytes of information! Now, you can carry around several gigabytes clipped to your keychain. Memristance may eventually allow researchers to squeeze terabytes into a keychain-friendly space, ushering in a whole new frontier in computing.

Life from Space Dust

Bullet-sized particles slammed into the spacecraft Stardust’s shield as it fought through the jets of comet Wild 2 (pronounced “Vilt-2”). The mission? To collect comet dust and gas and bring it back to Earth. NASA scientists hoped for evidence to support the theory that some of life’s ingredients formed in space, hitching rides on comets and meteors that pelted Earth.

Stardust survived its comet encounter, and a collection capsule packed with extraterrestrial dust returned to Earth in January 2006. Right away, it was clear that the dust contained an amino acid called glycine. “Glycine is an amino acid used by living organisms to make proteins, and this is the first time an amino acid has been found in a comet,” said Jamie Elsila of NASA’s Goddard Space Flight Center in Maryland. Amino acids have already been found in meteorites.

But before NASA could announce the discovery, they had to make sure that the space dust sample wasn’t contaminated. “It was possible that the glycine we found originated from handling or manufacture of the Stardust spacecraft itself,” said Elsila. Careful analysis revealed that the glycine contained Carbon-13, a special form of the carbon atom that is much more common in space than on Earth. “[This discovery] strengthens the argument that life in the universe may be common rather than rare,” said Carl Pilcher, director of the NASA Astrobiology Institute.

A “School” Full of Zebrafish!

What do your teachers do over summer vacation? In Rochester, Minnesota, teachers from the Lincoln K8 Choice Public School spent the summer studying zebrafish. During the school year, their students will be doing some “fishy” scientific research of their own. Dr. Stephen Ekker, a biochemist and molecular biologist at the Mayo Research Clinic, also in Rochester, is trying to change the way kids learn science. “Instead of trying to bring scientists in to teach a class or convincing teachers to become world-class scientists, we happened across a compromise,” says Ekker. The idea is for kids to do real research in science class, and then connect what they’re learning in science to the rest of the school day and their everyday lives, too.

Ekker brought together Lincoln school teachers specializing in all different subjects at the Mayo Clinic to develop zebrafish modules, specific experiments designed to collect data about the little striped fish. Lincoln students will also use the zebrafish in lessons in reading, writing, and history. The Zebrafish Core Facility, a genetics lab at the Mayo Clinic, will provide the adult and embryo fish that the students need to do their projects. These kids may even make some totally new scientific discoveries!

Why study zebrafish? “We share 75 percent of our genome with the zebrafish, and the fact that their development occurs fully visibly outside the mother allows us to learn much about genetics and development [from them],” says Ekker. The kids will research different genetic strains of the fish, and some classrooms will even get their own fluorescent microscopes. The teachers thought this specialized equipment would be way too expensive to use in the classroom, but that didn’t stop Ekker. “He’s been cobbling together microscopes for them out of spare parts,” says Elizabeth Zimmerman, spokesperson for the project.

Fluorescent microscopes — Microscopes that shine ultraviolet light on a material that either fluoresces (glows) naturally or has been colored with fluorescent dye so that it will glow.

It’s a T-shirt! No, It’s a Camera!

Cameras are everywhere — in cell phones, computers, high-security buildings and parking lots. But someday, instead of tossing a slim camera phone in your pocket, your pocket might be the camera, thanks to a recent breakthrough by researchers at the Massachusetts Institute of Technology (MIT) in Cambridge, Massachusetts. Engineering Professor Yoel Fink and his team managed to use a web of fibers to take a picture of a smiley face. The picture was black and white and kind of blurry, but “this work constitutes a new approach to vision and imaging,” says Fink. It’s the first time anybody has managed to take a picture with a mesh of fibers rather than a lens. The problem with lenses like the ones in our eyes and all modern cameras is that they can easily be damaged, causing blindness. A camera made of special fibers woven into fabric is much more durable — if one fiber is damaged, the others can still “see.”

To create their special optoelectric fibers, Fink and his team first form a tube out of layers of light-detecting materials. Then they heat the tube in a furnace and carefully draw out super thin strands — only two or three times the width of your hair. These thin strands retain the same structure as the original tube, but are much, much smaller. The smaller the strands, the higher the resolution and the sharper the final image will be.

Inside the individual strands, two layers of semiconductors measure light intensity and wavelength. Adding a third layer could theoretically allow the strands to detect color. Weaving the fibers into fabric allows the flexible surface to “see” and send electrical signals to a computer, which focuses all of the separate strands’ information into an image. “While the current version of these fabrics can only image nearby objects, it still can see much farther than most shirts can,” says Fink. He’s got that right!

Optoelectric — Combining visual and electric functions

Semiconductors — Materials that allow electricity through better than insulators, but not as well as conductors

Robot Scientist

Look out, Einstein, a robot wants your job! This robot, named Adam, may be the first non-human to ever independently think up and test hypotheses in order to discover new scientific knowledge. Adam’s discoveries so far all have to do with yeast genetics — not nearly as mind-blowing as Einstein’s theories on relativity, but still impressive when you realize that nobody told Adam which yeast genes to study. “Adam makes up its own mind what to do,” Ross King of Aberystwyth University in the United Kingdom, the robot’s creator, told CBC News in Canada. “It decides what experiments to do, what to test.”

Of course, Adam’s choices are limited by the information King feeds it and the lab equipment it has access to. The physical robotics system includes lots of microplates (for growing yeast cultures), robotic arms, incubators, a freezer, liquid dispensers, fans, and other equipment useful for biological research. Adam also has loads of data on yeast and other organisms. To decide what to do, Adam finds a place where the yeast genetic data is incomplete, then searches for complete information about similar genes in other organisms. By comparing all of this data, Adam is able to form a hypothesis. . .and start experimenting. Adam can begin up to 1,000 new experiments each day!

Why yeast? Biologists use this simple organism as a model for more complex ones, like human cells. So far, Adam has figured out the functions of 12 different yeast genes. When King and his team tested Adam’s results manually, everything was correct. Eventually, Adam will be able to move beyond yeast — as long as King uploads the data necessary for new experiments. King’s team has also built a new robotic scientist named Eve. This robot will screen new drugs for diseases like malaria.

Robotics has been useful in scientific laboratories for a long time, but usually the machines just do the work and generate data that humans have to sort through. This is the first time a robot has not only designed its own experiment, but determined its own results! Still, modern Einsteins shouldn’t worry about being replaced — robot scientists like Adam are much more likely to be lab assistants than brilliant theorists.

Your turn! What do you think a robot scientist should look like? Email your drawing to [email protected] or write to: ROBO-ART, ODYSSEY, 30 Grove Street, Suite C, Peterborough, NH 03458.

Miraculous Camera for the Blind

Elizabeth Goldring has been blind most of her adult life. She can sense only dark and light with one eye, and has very limited vision in the other. But she can surf the Internet, look at digital pictures of her family, and even take photos with a special device she helped develop as a senior fellow at Massachusetts Institute of Technology’s (MIT) Center for Advanced Visual Studies.

The story of Goldring’s “seeing machine” starts over 20 years ago, with a visit to her optometrist. He hooked her up to a device called a scanning laser ophthalmoscope (SLO) for a routine test. The machine projected an image directly onto Goldring’s retina, and she could see it!

“I asked if they could write a word, and they wrote the word ‘Sun,’” Goldring says. “It was the first word I’d seen for many months,” since her degenerative eye condition had worsened. She knew immediately that she had to find the machine’s inventor and figure out a way to share this technology with other visually impaired people.

The biggest problems in reaching that goal were price and size: a medical SLO is quite large and costs about $100,000. The SLO’s inventor, Rob Webb of the Schepens Eye Research Institute at Harvard University in Boston, Massachusetts, collaborated with Goldring and a team of MIT students to develop the current prototype. It can be made for under $500 and carried around in one hand. The “seeing machine” can be hooked up to any technology with a visual feed, such as a computer, video camera, or digital camera. The visual information travels to a liquid crystal display (LCD) screen within the seeing machine. Then, light-emitting diodes (LEDs) project the image onto a special lens that focuses it into a tiny spot of light on the retina.

The seeing machine won’t work for every visually impaired person — the retina has to be functional in order for the images to be processed by the brain — but for those with impairments like Goldring’s, who may be forgetting what it’s like to see a loved one’s face, such a machine would truly be a miracle. Goldring says, “I can’t believe that this eye that sees nothing can look into this machine and clearly see an image.”

. . .And Speedy, Hot Pink Submarines

“Welcome home, seafaring robot!” Scientists in Australia celebrated the successful first voyage of SG-154, a remote-controlled submarine that can dive down as deep as half a mile to measure and transmit data on currents deep below the ocean’s surface. The first mission, though, was less about measuring and more about remote-controlled diving practice. Instead of using a motor to dive, the submarine moves in a vertical zig-zag like a porpoise. Winged gliders help keep the submarine on course, and an oil-filled chamber inflates to handle pressure changes. “[SG-154] doesn’t have any propulsion to help it move forward or backwards — it just glides. So if the currents are too strong it can be a real problem,” says Ken Ridgway, senior researcher for Commonwealth Scientific and Industrial Research Organization (CSIRO) in Australia. The pink porpoise robot’s next mission: to measure ocean currents and conditions. The real-time ocean data that SG-154 measures can be used for everything from planning shipping routes to forecasting the weather.

Half Bike, Half Laundromat!

Next time you ride your bike down the street, think about all the energy you produce as you spin those pedals. What else could your bike’s energy be used for? I bet your first thought isn’t to do laundry. But wait until you see the bike-pedal-powered washing machine a team of Massachusetts Institute of Technology (MIT) students and staff designed and built.

In places where people have to carry water by hand to wash clothes in buckets or in a river or stream, the simple chore of doing laundry can take eight hours per load. And the process adds to water pollution. Fancy, high-powered washing machines don’t help if you have no money to buy them and have no place to plug them in. The MIT team’s challenge was to make a washing machine from spare parts that could run without electricity. The result? The “bicilavadora,” a name combining the Spanish words for “bicycle” and “washing machine.”

The team took their prototype to an orphanage in Ventanilla, Peru. With the orphanage’s 670 kids, there were plenty of clothes in need of washing! The test, however, had a problem: Some water leaked out around the edges, which could cause the outer metal barrel to rust. But the team is confident that they can make a more robust machine with only a few changes.

The outer barrel of the bicilavadora is made from pieces of an old metal oil barrel. The clothes go into an inner drum made from special plastic panels designed by graduate student Radu Raduta. A gear on the outside of the drums connects to a bike chain and frame. “It uses a standard mountain bike gear range,” explains Gwyndaf Jones, the instructor who led the trip to Peru. “The highest gear is the spin cycle, and the lowest gear is the wash cycle.” All you have to do is fill up the inner barrel with soap and water, then close the machine and start pedaling! Holes in the plastic inner barrel allow soap and water to flow in and out during washing and rinsing. After the water is all drained out, the wet clothes whip around like lettuce in a salad spinner. The almost-dry clothes are then hung on a clothesline to dry completely. The washing process is quick (about an hour) and allows plenty of time for more important work, like keeping 670 kids at an orphanage entertained.

Think about it! What other cool things could you make using the parts from a bicycle? Draw a picture of your invention and explain what it can do. Email your response to [email protected] or write to: BIKE INVENTION, ODYSSEY, 30 Grove Street, Suite C, Peterborough, NH 03458.

Robust — Strong or long-lasting

Uncle Sam Wants You

Hang out in the Alienware computer gaming area and play high-speed Internet military games like Tom Clancy’s Ghost Recon: Advanced Warfighter 2 or Call of Duty. Better yet, go to the simulator room and join one of three missions that bring to life authentic battle scenarios. In fact, sit in a real Humvee and “fire” on the enemy projected on a 15-foot-high battleground scene complete with surround-sound effects. Yes sir, Uncle Sam Wants You! To get you, the U.S. Army has invested $12 million in a state-of-the-art, one-of-a-kind army recruitment facility.

Located next to a Banana Republic store in the busy Franklin Mills Mall in Philadelphia, the U.S. Army Experience Center (AEC) is a two-year experiment. It’s a playground atmosphere that is all military. In addition to the gaming area, you can find a Tactical Ops Center, a Career Navigator, a lounge, and even a café. You can reserve an area of the 15,000-square-foot Center for clubs’ or educators’ meetings. And there’s no hard-core sales pitch — kids 13 and up can play. However, just in case you’re thinking about joining the U.S. Army, you’ll find recruitment officers in polo shirts on duty to answer your questions.

Even with this low-pressure approach, the Center has its critics. Some soldiers wonder if the use of video games glamorizes war and presents an unrealistic view of what it feels like when people get killed. John Grant, an Army veteran and member of the Philadelphia Chapter of Veterans for Peace says, “They’re using $12 million of taxpayer money to sell militarism to kids using video games, to brand the military in a positive way. This is an unfair recruiting method. Where is there in the Center something that shows veterans wracked with post-traumatic stress syndrome? Video games cannot simulate real combat.” But Pete Geren, Secretary of the Army, disagrees. In a press release announcing the Center’s opening, he said, “Potential recruits are afforded a unique opportunity through the Army Experience Center to learn what it means to be the best-led, best-trained, and best-equipped Army in the world by allowing them to virtually experience multiple aspects of the Army.”

Who Needs Blood and Gore?

Think of your favorite video game. What makes it fun to play? If you’re anything like most video game players, it’s not the blood (if there even is any in your favorite game!). Instead, your answer might be: “I like when I manage to beat a hard boss” or: “I love leveling up my character and choosing new skills.” That’s right, feeling in control or victorious, having lots of choices, and choosing the best strategies are much more important to most gamers than the amount of blood and guts, according to a series of two surveys and four studies by psychologist Richard Ryan of the University of Rochester in New York.

In three small experimental studies, Ryan and his team programmed different levels of violence into popular games. In a group of 36 male and 65 female college students, half played the original, violent version of Half-Life 2 and the other half destroyed their enemies in a much happier way. “Instead of exploding in blood and dismemberment, they floated gently into the air and went back to base,” Ryan told Science News. A different group of 39 male gamers (mostly around 19 years old) played The House of the Dead III set to either high violence (spouting blood) or low violence (green goo). The result? It’s not the goo or the blood that’s thrilling; it’s the feeling of victory.

In his research, Ryan didn’t forget to check for people who tend to be more hostile and angry in day-to-day life. They must like more blood, right? Wrong! Subjects who got high scores on psychological tests of aggression tended to prefer games advertised as violent, but when they actually played less violent versions of those games, they reported having just as much fun.

Although none of the studies involved kids, Ryan thinks this is good news for video game makers, parents, and players. Games don’t have to be bloody to be lots of fun!

Your turn! What’s your favorite video game and what makes it fun? Email your answer to [email protected] or write to: JUST FUN, ODYSSEY, 30 Grove Street, Suite C, Peterborough, NH 03458.

Virtual You

It sure is fun to be a wizard, superhero, or mad scientist in a virtual world like Teen Second Life or World of Warcraft. But no matter how hard you try to create an avatar who’s nothing like you, the way you play with that avatar has a lot to do with how you play in real life. If you’re a girl, you’ll most likely want to socialize with other characters. If you’re a boy, you’ll probably look for more fast-paced action games.

It’s pretty obvious that boys and girls tend to play differently, but it might be a surprise to learn that this is true across cultures, and even in virtual worlds where you’re trying to pretend to be someone else! Psychologists at Georgetown University in Washington, D.C., let 126 fifth graders loose in a MUD. No, that doesn’t mean squishy dirt; it stands for Multi-User Domain, a fancy term for a virtual world.

In this particular MUD, the kids got to pick a name, sex, and costume for their avatars. They could be a normal kid in a T-shirt and jeans, a punk kid in a leather jacket, a soccer player, a firefighter, or a wizard. Using a computer mouse, the kids could then switch between background scenes, make their characters move, change facial expressions, and talk in speech bubbles.

The researchers were interested in something called gender-bending, when a girl chooses a boy avatar or vice versa. Only 13 percent of the fifth graders were gender-benders, and they were more likely to have fun with opposite-gendered avatars if they were sitting and playing together in a room with a friend of the same sex. When girls and boys who knew each other tried to play together, they often had trouble agreeing on what kind of game to play. The girls wanted conversation games; the boys wanted action games. This isn’t a bad thing, though. “MUDs can provide a virtual play space for preadolescent children to discover who they are,” Sandra Calvert, one of the study’s authors, told Science News. I wonder if any kids discovered that they wanted to grow up to be real wizards?

Do you spend hours playing in virtual worlds? What kinds of games do you play, and what was the craziest avatar you ever created? Email your name and a description of your avatar to [email protected] or write to: MY AVATAR, ODYSSEY, 30 Grove Street, Suite C, Peterborough, NH 03458.

Are Wii Having Fun Yet?

Want to help science by playing your Wii? Head on over to Rice University in Texas where professors Marcia O’Malley and Michael Byrne are testing the Wiimote’s motion capture abilities to learn about, well, learning. They plan to record people as they play games using different motor skills, such as swinging the Wiimote as a virtual tennis racquet to hit a virtual ball, and analyze what happens as subjects get better at the game. Eventually, the data O’Malley and Byrne collect may be useful for creating something like a robotic sleeve that helps you improve your tennis game by gently guiding you to fix your swing. The pair has earned a National Science Foundation (NSF) grant to fund their research for the next three years.

This project follows up on O’Malley’s previous work developing a computer system using a joystick to help stroke victims recover simple motor skills. When the user makes a wrong move, the joystick resists the motion, guiding the hand along the right path.

O’Malley and Byrne are now interested in more complex motor skills and in three different types of learners: “experts” who learn a new motor skill at a steady pace until they figure it out; “novices” who learn at the same pace but may never figure it out; and others who, O’Malley says, “start off awful, but somewhere in the middle of training . . . suddenly ‘get it.’” It’s Byrne’s job, as a specialist in computer-human interaction, to figure out when, where, and how that “I get it!” moment happens. He’ll do that by analyzing computer data on the range of motion used in performing a motor skill. The experimenters hope to then use their results to help people learn the skill faster, with less trial and error. “Using the Wii will be a great way to recruit subjects,” says O’Malley. “We can say, ‘Hey, kids, come play some games!’”

Teddy BEAR to the Rescue!

It may have a head like a teddy bear, but the Battlefield Extraction Assist Robot (BEAR) is much more than a friendly face. It can carefully pick up a wounded soldier, then squeeze through doorways, climb stairs, and zoom across smooth surfaces on wheels to carry the soldier to safety.

The robot’s strange-looking lower body helps it switch between different kinds of motion. It can stand up on its “tiptoes” at its full six-foot height to walk over rough terrain, or it can fold its legs down into a tread, like that of a tank, to travel quickly. When it needs to pick something or someone up, it gets down on its belly and slides its arms underneath like a forklift. It can lift about 500 pounds in one fluid motion, thanks to its hydraulic system.

Vecna Technologies developed the first BEAR prototype in Cambridge, Massachusetts, in 2007. “We saw a need for a robot that could essentially go where a human can,” says Daniel Theobald, Vecna’s president. According to Theobald, the BEAR can’t think like a human, yet. The current prototype is like a giant remote-control robot. It has cameras and microphones so its controller sees what it sees and hears what it hears in order to lead it across a battlefield. Eventually, Vecna plans to build a BEAR that is autonomous.

Does this all sound more than vaguely familiar? That’s because fiction writer Angie Smibert based her story “The BEARS of Syria Planum,” which appeared in our November 2008 issue, “Robo-Buddy,” on Vecna’s real-life ’bot. She even called it “Theo.”

As we learned in that issue, robots that can go where people go and lift things gently are useful for much more than war zones. Meet TransferBot and HomeBEAR. These Vecna robots haven’t been built yet, but they won’t be too different from their battlefield cousin. TransferBot is designed to help move hospital patients who can’t move themselves. HomeBEAR will be a friendly robot-helper for elderly or disabled people who need an extra hand (or two) to get by during the day.

Scrubbing Air

Uh oh, the air’s full of carbon dioxide! Better grab a scrubber and get to work. If you don’t think you can clean the air like you clean a toilet, think again.

The carbon scrubber, built by David Keith of the University of Calgary in Canada and his team, is basically a twenty-foot-tall plastic tower on wheels that takes in normal air on one end, sends it through filters soaked with caustic soda — a chemical that absorbs CO2 — and spits out clean air on the other end. Keith’s team is still testing where to store all their captured carbon. One idea is to inject it into rocks on the ocean floor, but scientists still aren’t sure what effect that could have on ocean ecosystems.

Scrubbing up carbon is not a new idea. Some carbon-producing sources, like power plants, use carbon capture and storage (CCS) technology to soak up extra carbon right where it’s created. Air capture is trickier business. “At first thought, capturing CO2 from the air where it’s at a concentration of 0.04 percent seems absurd,” Keith notes. (Near factories, the concentration is closer to 10 percent.) That’s because if the air scrubber uses too much electricity, it will put just as much carbon back into the air as it takes out!

Eventually, Keith hopes to power his tower with solar panels, which means it won’t produce any CO2 at all. The prototype uses electricity, but only a small amount. In fact, Keith says, for every kilowatt-hour of electricity used to run the machine, the carbon it captures is ten times as much as the carbon emitted to make the electricity. However, Keith’s tower can only capture 20 metric tons of CO2 per year on a single square meter of scrubbing material, which is only as much as one average American produces in that same time period. That’s a pretty good amount of scrubbing, but even used on a massive scale it’s still not enough to turn global warming around.
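Keith’s ten-to-one claim is easy to sanity-check with a little arithmetic. Here’s a minimal sketch, assuming a hypothetical emission rate of 0.5 kg of CO2 per kilowatt-hour for a fossil-heavy power grid (our number for illustration, not Keith’s):

```python
# Back-of-the-envelope check of the scrubber's energy math.
# The 0.5 kg figure is an assumed grid emission rate, not from Keith's team.
emitted_per_kwh = 0.5                     # kg of CO2 emitted to generate 1 kWh (assumed)
captured_per_kwh = 10 * emitted_per_kwh   # the tower captures ten times that much
net_per_kwh = captured_per_kwh - emitted_per_kwh

print(f"Gross capture per kWh: {captured_per_kwh} kg")
print(f"Net capture per kWh:   {net_per_kwh} kg")  # still strongly carbon-negative
```

Whatever the grid’s exact emission rate, a ten-to-one ratio means the scrubber removes far more carbon than its electricity puts back.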

There’s a big prize out there for a system that could remove one billion or more metric tons of CO2 per year from the atmosphere for ten years! The prize of 25 million dollars was offered in 2007 by British industrialist Richard Branson and former U.S. Vice President Al Gore.

One carbon-scrubbing tower won’t have much impact on global warming, but it’s one small start! Discovery Channel profiled Keith’s creation on the show Project Earth. You can explore the different parts of the scrubber here:

Your Very Own Jetpack
February 2009

What’s the coolest way to zip around a city? Driving a car? Riding a motorcycle? Zooming on a skateboard? What about flying with a jetpack?

Jetpacks aren’t just for comic books, video games, and movies any more. In July 2008, New Zealand inventor Glenn Martin proved that people can learn to fly . . . with a little help from some gas-turbine-powered fans. Martin has been working on his flying machine for 27 years, and finally the Martin Jetpack is ready to go on sale in 2009.

This isn’t the first jetpack ever; the U.S. military built one, called the “Bell Rocket Belt,” in the 1950s, but it could only fly for 26 seconds before running out of fuel! That’s perfect for Hollywood stunts, but not very practical. Martin’s pack can fly thirty miles in half an hour on a full five-gallon tank of regular gasoline. He still hasn’t taken it higher than a few feet off the ground in demonstrations, but he’s planning higher-altitude tests. The controls are two simple joystick-like handles. One controls pitch (up/down) and roll (tilting from side to side); the other, yaw (left/right) and throttle (speed).
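The quoted range and tank size let you work out the jetpack’s cruising speed and fuel economy. A quick sketch using only the figures above:

```python
# Cruising speed and fuel economy from the Martin Jetpack's quoted numbers.
distance_miles = 30
time_hours = 0.5
fuel_gallons = 5.0

speed_mph = distance_miles / time_hours   # 30 miles in half an hour
mpg = distance_miles / fuel_gallons       # 30 miles on a 5-gallon tank

print(f"Cruising speed: {speed_mph:.0f} mph")
print(f"Fuel economy:   {mpg:.0f} miles per gallon")
```

Sixty miles per hour, but only six miles per gallon: quicker than city traffic, thirstier than almost any car.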

It’s still not the safest or most efficient way to get around, but it sure looks like fun. That’s if you don’t mind the noise or having a 250-pound thing the size of a piano strapped to your back. But you’ll have to wait until you’re older to hop in and try a flight. Wannabe pilots have to weigh between 140 and 240 pounds and pass a training course. Then, of course, they have to pull together $100,000. Start saving your pennies!

Skyscraper Farms
November 2008

Imagine a glass building reaching to the sky in the middle of a bustling city. Inside, an elevator takes you up, not to stores, office buildings, or apartments, but to rows of tomatoes, beans, or pumpkins. In May 2008 at the first ever World Science Festival in New York City, Columbia University professor Dickson Despommier presented his Vertical Farming ideas. He imagines a future where glass-walled skyscraper farms use natural sunlight and recycled wastewater to grow crops right in the middle of cities.

Right now, most of our food grows on acres of flat farms, which take up precious space and are vulnerable to natural disasters, pests, and seasonal weather. Meanwhile, the human population continues to increase and crowd into cities. There will most likely be about 9.2 billion people in the world by 2050! There simply isn’t enough room on our planet for regular farms to support all those human lives.

Vertical farming could feed all those people using much less land. One indoor acre equals about four to six or more outdoor acres, depending on the crop. Thirty acres’ worth of strawberries, for example, can be grown on one indoor acre! Indoor crops can grow year-round under controlled conditions, safe from bad weather, and Despommier designed his 21-story skyscrapers to produce more energy than they consume. There’s no need for pesticides or chemicals, and the farms can recycle sludge from wastewater as topsoil (see the March 2008 “Poop! What a Waste” ODYSSEY), recirculate water to feed the plants, and produce energy by composting non-edible plant parts. Returning all these extra acres of traditional farmland to a wild state would combat global warming: More trees and shrubs would absorb carbon dioxide from the atmosphere.

If vertical farms were a part of every city, tons of money and energy would be saved on transportation costs, and city kids would grow up with more fresh veggies and a better understanding of where food comes from. Despommier estimates that 150 of his buildings could feed New York City for a year. The cost of building one vertical farm would be about $84 million, according to Despommier, but it would cost only $5 million to run the farm per year, which would bring in as much as $18 million in profit, based on the rising prices of produce.
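Despommier’s cost figures imply a payback period you can compute directly. A quick check, taking the article’s numbers at face value:

```python
# Vertical-farm economics, using the figures quoted above.
build_cost = 84_000_000       # one-time construction cost ($)
annual_cost = 5_000_000       # yearly operating cost ($)
annual_revenue = 18_000_000   # yearly income from produce ($)

annual_profit = annual_revenue - annual_cost
payback_years = build_cost / annual_profit

print(f"Annual profit:  ${annual_profit:,}")
print(f"Payback period: about {payback_years:.1f} years")
```

At $13 million in profit per year, the building would pay for itself in roughly six and a half years.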

Don’t Look Down!

Actually, looking down is the whole point of the skywalk, a glass bridge jutting out 70 feet over the western edge of the Grand Canyon in Arizona. How far is it to the Colorado River at the bottom? Well, consider that three Empire State Buildings piled on top of each other would fit under the skywalk, with some room to spare!
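You can check the Empire State Building comparison yourself. A rough sketch, assuming the building’s roof height of about 1,250 feet and the commonly quoted 4,000-foot drop from the skywalk to the river (both round numbers, not from the article):

```python
# Stacking Empire State Buildings under the skywalk (rough, assumed figures).
empire_state_ft = 1_250   # height to the roof; the antenna adds about 200 more feet
stack_ft = 3 * empire_state_ft
drop_ft = 4_000           # commonly quoted drop from the skywalk to the river

print(f"Three stacked buildings: {stack_ft} ft")
print(f"Room to spare: about {drop_ft - stack_ft} ft")
```

Three stacked skyscrapers come to 3,750 feet, leaving a couple hundred feet of daylight above them.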

Founder David Jin, who raised $30 million to build the structure, says, “My vision was to enable visitors to walk the path of the eagle.” His vision wouldn’t have been possible without the support of the Hualapai (wall-uh-pie) Native American tribe, who allowed the skywalk to be built on their land. Sheri Yellowhawk, CEO of Grand Canyon West, a tribally owned tourism company, explained the decision: “When we have so much poverty and so much unemployment, we have to do something.”

The skywalk offers views of the canyon all around, and directly down through five layers of four-inch-thick clear glass. Visitors have to wear special booties to keep their shoes from scratching its surface. The skywalk is the first engineering feat of its kind in the world, and half a million people have walked along the U-shaped path through the air since it opened in March 2007.

Mark Johnson, architect for the project, planned and tested the skywalk very carefully. Several hundred people can safely walk on it at once, though tour groups are limited to 120. Shock absorbers keep it from rocking like a diving board. The skywalk’s structure was built on solid ground first; the glass floor rests within its million-pound steel frame! It wasn’t possible to use a crane in the rocky terrain above the canyon, so builders used trucks with winches to roll the structure out over the edge at the glacial rate of an inch a minute. Then, workers secured the bridge in place with almost a hundred steel rods that were bored deep into the limestone like nails.

Would you be too scared to look down? You’re not alone! But if you someday work up the courage to step out over the edge, you’ll join NASA astronaut Edwin “Buzz” Aldrin, who was included in the first group of humans to walk above the canyon. He was the second person to walk on the moon!

Your Cell Phone’s Invisible Powers

Cell phones transmit electrical signals. So does your brain. What happens when the phone is against your ear, just inches from your brain? Could the phone’s transmissions affect your health, mood, or thoughts? To be safe, all cell phones sold in the U.S. must have a specific absorption rate (SAR) of less than 1.6 W/kg (watts per kilogram). The SAR rating measures how much radio-frequency energy gets absorbed by your body while chatting on your cell.

Two recent studies put cell phones to the test. Both used electroencephalographs (EEGs) to measure the brain activity of volunteers who had cell phones strapped to their heads. The first study, led by Rodney Croft of the Brain Science Institute, Swinburne University of Technology in Melbourne, Australia, included 120 healthy men and women. A computer controlled and recorded transmissions to the cell phones so that neither the researchers nor the volunteers knew when the phones were on (actively transmitting) and when they were idle. This is called a double-blind experiment, and it allowed the EEG data to speak for itself. What did the data say? While the phones were transmitting, the alpha waves in the subjects’ brains were stronger than while the phones were idle. And most of these boosted alpha waves occurred in the brain tissue closest to the cell phone! Alpha waves are a certain pattern of brain activity that reflects how aware you are. The stronger your alpha waves, the more likely it is you’re daydreaming or falling asleep.

Speaking of sleep, the second study was led by James Horne and colleagues at the Loughborough University Sleep Research Centre in England. Ten male volunteers were restricted to 6 hours of sleep, and came in at weekly intervals to lie in a soundproof, lit bedroom with a silent phone beside their heads. The phones were randomly set to talk, listen, standby, or idle for half an hour. EEGs were recorded, and the volunteers (who had no idea whether the phones were on or off) reported how sleepy they felt. Afterward, the phones and the bedroom lights were switched off, and the subjects got to rest for 90 minutes while researchers continued to monitor their brainwaves.

The results showed that subjects exposed to talk mode took twice as long to fall asleep, and their delta waves (a brainwave pattern that strengthens during deep sleep) stayed low for up to an hour after the phones were shut off. Talk mode had the highest SAR of any mode in the experiment, 0.133 W/kg, but that is still way below the safe limit set by the Federal Communications Commission. There’s no need to stop talking to your friends before bed, though. The effects measured in this study are about the same as drinking half a cup of coffee, and are not at all dangerous. Lots of factors much stronger than cell phones affect your sleep every night!

Horne told Scientific American (May 2008 issue), “These findings open the door by a crack for more research to follow. One only wonders if with different doses, durations, or other devices, would there be greater effects?”

Who’s in that Cave?

He lives in a cave, floats in midair, and you can walk right through him! No, we’re not talking about a ghost. We’re describing the world’s first anatomically correct virtual human, named “CAVEman” by his creators. His “CAVE” (CAVE — Automatic Virtual Environment) is a cube-shaped virtual reality room. Three of the walls and the floor project a model of a human body in four dimensions, including time. Put on your electronic shutter glasses, grab a “wand” (a special kind of joystick that works a lot like a computer mouse) and you’re ready to meet CAVEman.

What do you do with a virtual human? Christoph Sensen and his colleagues at the University of Calgary in Canada designed CAVEman as a tool for studying disease — it’s like an anatomy textbook come to life. You can walk around or through the whole body, or zoom in using the wand until a single blood vessel seems as thick as your arm.

Remember those four dimensions? CAVEman doesn’t just sit there; his body changes over time. Researchers can feed the CAVEman computer programmed genetic data, or input information about a particular chemical into the program and watch how the chemical would interact with bodily systems in real life. Eventually, this kind of simulation could help test new drugs before researchers try them on animals or people. CAVEman could also provide great training for surgeons or other medical students who need plenty of hands-on experience.

The CAVEman project was completed in May 2007. Maybe in the distant future, we’ll each have our DNA mapped to a virtual body floating in a “cave” somewhere. Whenever you get sick, doctors could test drugs or procedures on the virtual-you first! Now for your ideas! How else could a four-dimensional computer model of the human body help science? What problems might it make easier to solve?

Email your response to [email protected] or write to: VIRTUAL BODY, ODYSSEY, 30 Grove Street, Suite C, Peterborough, NH 03458.

Anatomically – Relating to the structure of a human or animal body

Simulation – A representation or model of a physical system for use in experimental testing

Music of the Future?
September 2008

Your parents probably stare in awe at your MP3 player, but when you have kids, you may have a whole new music format to wonder about. You might find yourself asking, “Is that song you’re listening to really the original artist’s voice, or a clever computer imitation?”

Mark Bocko and his team at the University of Rochester in New York managed to reproduce a 20-second clarinet solo in a file a thousand times smaller than a regular MP3. How did they do it? Simple. They taught a computer how to play the clarinet! The idea is this: The sound that comes out of a musical instrument follows the laws of physics. If you can measure every factor that affects that sound, a computer program can make a sound identical to a real clarinet.

The researchers created computer models based on the physics of the clarinet and the clarinet player! That’s a lot of measurements. They modeled everything from the backpressure in the mouthpiece for all the different fingerings, to the way the player’s lips moved! Once the virtual instrument and player were ready, Bocko’s program “listened” to a real clarinet solo, and figured out which actions were necessary to create the right sounds. The program made its own “sheet music” of clarinet and clarinet player physics. When this file is fed back into the program, you hear a song that sounds a lot like the original. It’s not perfect yet, but “maybe the future of music recording lies in reproducing performers, not recording them,” says Bocko.

One clarinet is a long way from a whole band, or even a single human voice — the human vocal tract is very complex — but this kind of computer simulation may be the only way to make the smallest possible music file. MP3s and CDs contain records of sound that update (move ahead sequentially) thousands of times per second. All of these updates contain every single bit of information about the sound, and happen even when a player is holding a single note. Bocko’s file contains only the directions needed to reproduce sounds in real time. The virtual instrument and player in his computer program do the rest.
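To see why Bocko’s approach saves so much space, compare a conventional recording’s size with a file a thousand times smaller. A rough sketch, assuming a typical 128-kilobit-per-second MP3 (our assumption; the article doesn’t give exact file sizes):

```python
# Size of a 20-second clip: ordinary MP3 vs. a file "a thousand times smaller".
seconds = 20
mp3_bits_per_second = 128_000            # a typical MP3 bit rate (assumed)
mp3_bytes = mp3_bits_per_second // 8 * seconds
model_bytes = mp3_bytes // 1000          # the thousand-fold reduction reported

print(f"Ordinary MP3:     {mp3_bytes:,} bytes")
print(f"Model-based file: {model_bytes:,} bytes")
```

The MP3 stores hundreds of thousands of bytes of sound samples; the model-based file needs only a few hundred bytes of playing instructions.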

This is some amazing technology, but it certainly won’t replace real instruments or regular recordings any time soon. So don’t hope for a computer program to practice your trumpet for you!

Can you hear the difference? Go to and listen to both the real and virtual clarinet. Which do you like better? Have a friend or family member listen, but don’t tell them which is which. Can they guess?

Email the results of your experiment to [email protected] or write to: CLEAR AS A CLARINET, ODYSSEY, 30 Grove Street, Suite C, Peterborough, NH 03458.

The Checkers Solution

Chinook can beat you at checkers. Even if you’re a genius and don’t make a single bad move, the game will end in a tie. If you don’t believe it, go to and play against Chinook yourself.

Chinook is a computer program created by Jonathan Schaeffer and his colleagues at the University of Alberta (Canada). Ever since 1989, hundreds of computer processors have been analyzing 500 billion billion (yes, that’s billion billion!) possible checkers positions. In April 2007, Chinook’s creators announced that the puzzle of the game of checkers had been solved! If both players play perfectly, there is now proof that the game will always end in a tie.

Chinook took a long time to perfect its game. In 1990, the computer program entered the checkers World Championship and lost to Marion Tinsley in a grueling 39 games. Dr. Tinsley won four, lost two, and all the rest were tied. Chinook won the championship in 1994, becoming the first computer program to ever win a human world championship of any kind. Now, losing would be impossible. Chinook knows every possible move.

The way a computer plays a game like checkers is very different from how you play. Chinook has a library of opening moves played by human grandmasters, a database that traces backwards from possible endings, and an algorithm that looks ahead a few turns at all possible outcomes of each move. This kind of brute-force attack on the game is also how IBM’s computer chess champion, Deep Blue, beat human grandmaster Garry Kasparov in 1997. Chess, however, is much more complicated than checkers.
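To get a feel for what “looking ahead at all possible outcomes” means, here is a toy version of game-tree search, a minimal sketch of the idea only (Chinook’s real search is vastly larger and cleverer). In this simple game, players take turns removing 1 or 2 stones from a pile, and whoever takes the last stone wins:

```python
def current_player_wins(stones):
    """True if the player about to move can force a win."""
    if stones == 0:
        return False  # no stones left: the other player just took the last one
    # Look ahead: try every legal move; if any move leaves the opponent
    # in a losing position, the current player can force a win.
    return any(not current_player_wins(stones - take)
               for take in (1, 2) if take <= stones)

# Positions where the player to move always loses (multiples of 3):
losing = [n for n in range(1, 10) if not current_player_wins(n)]
print(losing)  # [3, 6, 9]
```

This tiny game can be searched to the very end; “solving” checkers meant doing the equivalent for 500 billion billion positions.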

If you think 500 billion billion checkers positions is a lot for even a computer to know, try (if you can) to imagine the square of that number — that’s how many positions a computer would have to analyze to solve the game of chess! “Given the effort required to solve checkers, chess will remain unsolved for a long time,” Schaeffer said in the journal Science, where he and his colleagues published their proof.
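Python’s whole-number arithmetic makes Schaeffer’s comparison easy to write down:

```python
# 500 billion billion checkers positions, and the article's rough
# "square of that number" scale for chess.
checkers_positions = 500 * 10**18       # 5 x 10^20
chess_rough = checkers_positions ** 2   # 2.5 x 10^41

print(f"Checkers positions: {checkers_positions:.1e}")
print(f"Chess (rough):      {chess_rough:.1e}")
```

Squaring turns 10^20 into 10^41: not twice as hard, but a hundred billion billion times harder.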

Algorithm — A logical, step-by-step procedure used for solving a mathematical problem

Wireless World Whopper!

Wow — 238 miles! That’s the distance from Washington D.C. to New York City. It’s also the world record for the longest-distance, point-to-point wireless link set on April 29, 2007, in Venezuela.

The Escuela Latinoamericana de Redes (EsLaRed), or Networking School of Latin America, identified the path between two mountain peaks, Platillón and El Águila. “It is not easy to find places that will allow for experiments at great distances, due to the curvature of the earth,” Ermanno Pietrosemoli, president of EsLaRed, told the Association for Progressive Communications (APC) in an interview.

Wireless connections require line of sight, which is why you often lose cell phone reception in hilly places. If your phone can’t “see” a tower, your voice won’t transmit. Wireless Internet connections work the same way. Satellites provide constant line of sight to almost anywhere on Earth, but cost as much as $3,000 per megabit per second! That’s fine for huge corporations, but completely out of the question for most developing countries.
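The curvature problem can be quantified with a standard rule of thumb (not from the article): the distance to the radio horizon, in kilometers, is roughly 3.57 times the square root of the antenna’s height in meters. A sketch estimating how tall both ends of a 238-mile link must be:

```python
import math

def horizon_km(height_m):
    # Standard rough formula for line-of-sight distance to the horizon.
    return 3.57 * math.sqrt(height_m)

link_km = 238 * 1.609   # 238 miles in kilometers
# With equal-height endpoints, each antenna's horizon must cover half the link:
needed_height_m = (link_km / (2 * 3.57)) ** 2

print(f"Link length: {link_km:.0f} km")
print(f"Each peak must be roughly {needed_height_m:.0f} m tall")
```

The answer comes out near 2,900 meters per peak, which is why the record required two Andean mountaintops rather than rooftop antennas.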

Luckily, Eric Brewer and his team, Technology and Infrastructure for Emerging Regions (TIER) from the University of California at Berkeley, developed a wireless system and provided the equipment for the experiment. TIER focuses on connectivity solutions for rural areas. Their system costs only about $800 for a pair of small computers with directional antennas, and operating costs are low as well. Your usual wireless fidelity (Wi-Fi) transmitter sends its signal in all directions. The new record-setting system focuses the signal to a specific point, allowing much longer distances as long as the transmitter and receiver are correctly aligned.

Unfortunately, most places don’t have convenient mountains that will allow 238-mile links. Typically, Berkeley’s system links locations about 30 to 60 miles apart. But even these distances are “milestones” in a world where wireless usually only works at about 200 feet!

Wishing for a New, Improved Internet?

If you had a genie in a bottle and three wishes, how would you change the Internet? Don’t you think it would be nice to have better security? Fewer pop-ups? Worldwide wireless? Virtual reality? Well, the National Science Foundation (NSF) has provided funding for a different kind of genie, called the Global Environment for Network Innovations (or GENI) project. In an NSF press release, project director Chip Elliot says, “GENI will give scientists a clean slate on which to imagine a completely new Internet that will likely be materially different from that of today.”

This project is just getting started, so don’t expect your Internet wishes to come true for quite some time.

What the Internet needs, according to GENI researchers, is a totally new architecture. In computer science, architecture is the basic design skeleton that organizes a program. Rather than building this new Internet design from scratch in a lab somewhere, GENI plans to set up an experimental network environment where researchers can try out new ideas. GENI organizers believe that the Internet cannot continue to improve as a mish-mash of random ideas, and that a unified, coordinated attack on current problems is necessary. The GENI research plan states that the Internet is “based on decisions made in the 1970s that severely limit its security, availability, flexibility, and manageability.”

One of the most fascinating aspects of the Internet, however, is that no one sat down and created it. The net is the result of thousands of human minds making small changes over a period of time. Is it really a good idea to start all over? What might be lost in the transformation? What could be gained? Let us know what you think, and what you would work on if you were a researcher at GENI. Write to Internet Ideas ODYSSEY, 30 Grove Street, Suite C, Peterborough, NH 03458.

I, Neuron

Imagine if you could take a computer chip and rig it up so that it could store simple information in live neurons. That, for researchers in the field of Artificial Intelligence, would be the Holy Grail.

Well, guess what? It’s been done!

As reported in Physical Review Letters, Itay Baruchi and Eshel Ben-Jacob of Tel Aviv University in Israel have shown that it’s possible to store information in a network of neurons in a Petri dish. The biggest challenge facing the researchers was to successfully store new information inside certain cells so that they would fire without destroying their old firing patterns.

After watching the natural flow of neurotransmitters, the researchers targeted specific points in the network and injected them with a chemical at three separate times. Each injection represented a simple memory. They then left the neuron system alone but monitored the firing patterns, which revealed that the three memory patterns persisted, without interfering with each other, for more than 40 hours.

Many researchers believe that complex patterns of neuronal firing are “maps for memory,” which the brain uses when storing information. If so, then these researchers succeeded in creating the first chemically operated neuro-memory chip. Future research could help neurologists understand how our brains learn and store information.

Are You Addicted to Video Gaming?

How much time do you spend at the computer screen playing video games? Do you think you’ve become more aggressive over the years? How about your grades at school? Do you feel you’ve been able to concentrate enough to get good grades? Or are your grades falling? Are you spending less and less time with your friends and more time behind the computer playing games?

Although there’s no scientific proof . . . yet . . . a leading council of the American Medical Association (AMA) wants to have excessive video-game playing officially classified as a formal psychiatric addiction — to raise awareness and enable sufferers to get insurance coverage for treatment.

While the AMA admits that more research is needed, a recent report prepared for its annual policy meeting strongly encouraged that video-game addiction be included in a widely used diagnostic manual of psychiatric illnesses. The AMA fears that overuse of video games and online games could become a problem in the future for children and adults. In a June 2007 Houston Chronicle report on the issue, AMA president Ronald Davis said, “While more study is needed on the addictive potential of video games, the AMA remains concerned about the behavioral, health and societal effects of video game and Internet overuse.”

Delegates voted to have the AMA encourage more research on the issue, including seeking studies on what amount of video-game playing and other “screen time” is appropriate for children. The AMA’s report says that up to 90 percent of American youngsters play video games and that up to 15 percent of them — more than 5 million kids — might be addicted.

Are You Being Bullied in Cyberspace?

According to research by the Pew Internet Project, one third of US teenagers have been victims of cyber-bullying, with girls more likely than boys to be targets.

Who are the most vulnerable? Teens who share their identities online! Of the teenagers questioned, some 32 percent had experienced at least one of the following: a private email, IM, or text message forwarded or posted where others could see it; an aggressive or threatening message; a rumor spread about them online; or an embarrassing photograph posted online without permission.

The survey found that 39 percent of social network users had been cyber-bullied in some way, compared to 22 percent of online teens who do not use social networks.

As more and more young people join social networking sites such as MySpace and Facebook, they are opening themselves and their personal information up to more people. How do you prevent becoming a victim? The report advised youngsters not to give out personal contact details or post photographs of themselves online.

“Teletubbies” For Real!

Maybe you’ve seen them on children’s TV — those alien-looking furballs with a small screen in their tummies that can show TV pictures. Well, that idea may not be too far off in the future — not the creatures, but the wearable tummy TV sets.

You see, Sony Corp. of Japan has just created a razor-thin TV monitor that’s so flexible it can bend like paper in your hand while showing full-color video! The new 2.5-inch monitor measures 0.3 millimeter (0.01 inch) thick. While the technology exists, Tatsuo Mori, an engineering and computer-science professor at Nagoya University, said some things still need to be “ironed out” — like finding a way to make the screen larger, ensuring durability, and cutting costs.

The new display is a kind of “electronic paper” technology that combines an organic thin film transistor (required to make flexible displays) with an organic electroluminescent display, which delivers decent color images and is well suited for video.

What are some future applications? Sony spokesman Chisato Kitsukawa said the display could be wrapped around a lamppost or a person’s wrist — and, like a Teletubby’s tummy screen, even be worn as clothing. It could even be pasted up like wallpaper! Talk about tuning in!


One of the greatest dangers to astronauts orbiting the Earth is solar radiation storms — swarms of electrons, protons, and heavy ions accelerated to high speed by explosions on the Sun. Those of us with two feet planted on the ground are not at risk, because Earth’s atmosphere protects us. Not so in space. But Arik Posner, a member of the research staff of the Southwest Research Institute in San Antonio, Texas, has found a way to give astronauts an hour’s warning before these storms hit. That’s plenty of time for these brave men and women to take shelter!

The key to the predictions is electrons. “Electrons are always detected ahead of the more dangerous ions,” says Posner. That’s because the electrons, being lighter and faster than the other particles, race out ahead. In a way, they are the town criers warning us that “the ions are coming!”
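A little arithmetic shows how that head start turns into tens of minutes of warning. This sketch uses illustrative particle speeds — the real speeds vary from storm to storm and are not figures from Posner’s study:

```python
# Why electrons give advance warning: they cross the Sun-Earth gap faster
# than the dangerous ions. Speeds below are assumptions for illustration.
AU_KM = 149_600_000          # Sun-Earth distance, kilometers
C_KM_S = 299_792             # speed of light, km/s

electron_speed = 0.9 * C_KM_S   # near-light-speed electrons
ion_speed = 0.3 * C_KM_S        # assumed slower, heavier ions

t_electron = AU_KM / electron_speed   # travel time, seconds
t_ion = AU_KM / ion_speed

warning_minutes = (t_ion - t_electron) / 60
print(f"lead time: {warning_minutes:.0f} minutes")
```

With these assumed speeds the electrons arrive roughly 18 minutes ahead of the ions — comfortably inside the 7-to-74-minute range of warnings Posner found in the 2003 data.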

Now Posner has shown how, using a special instrument aboard the Solar and Heliospheric Observatory (SOHO) — one that counts particles coming from the Sun and measures their energies — scientists can anticipate the arrival of the dangerous ions. He tested his theory by analyzing the satellite data from four major solar storms in 2003. The data predicted all four storms and gave advance warnings ranging from 7 to 74 minutes.

Although the predictions are not perfect, Posner believes he can improve on the method. In fact, planners at the Johnson Space Center plan to use Posner’s method in their design of future lunar missions. The fact is, even though it has some kinks, the method is still more than 20 percent more reliable than current methods.

Flying High?

It’s almost here. The ride you’ve been waiting for — suborbital flight! As reported in Popular Science, New Mexico–based Virgin Galactic — a company that plans to fly 500 passengers a year to an altitude of over 60 miles — has unveiled a mockup of the interior of its SpaceShipTwo (SS2) suborbital tourist vehicle.

SS2’s fully pressurized cabin can accommodate six passengers and two pilots. It is also spacious enough for passengers to unbuckle their seatbelts and float around weightless for about five minutes before returning to Earth. And if they’re worried about those buffeting G-forces as the vehicle rockets higher, they shouldn’t be. This “bird’s” cabin has seats that automatically recline to orient the passengers’ bodies to best absorb the G-forces.

Will they be able to see Earth? You bet! The cabin is equipped with 15 windows (including several on the floor and ceiling — in case they’re floating) with vistas spanning some 1,000 miles in all directions.

So when might you expect to fly? Well, Virgin Galactic’s vehicle designer and his team plan to complete a prototype late next year, and they expect to have a first flight sometime in 2009, though some critics say that this is a bit optimistic.

Now for the big question: What’s the price?

Got $200,000?

If not, don’t worry, Virgin Galactic expects to offer lotteries and other means of getting cash-flow-challenged people on board — and that includes a reality-TV game show that is now under development!

Rare Opportunity

The Martian rover Opportunity has been on the run since January 2004. That’s rare; no one expected the little rolling robot to last so long. Now a high-resolution camera aboard NASA’s Mars Reconnaissance Orbiter has given us another first: a never-before-seen bird’s-eye view of Opportunity. The image shows the rover at the rim of Victoria Crater — an impact crater about half a mile in diameter near the equator of Mars. The image was captured by the orbiter’s High Resolution Imaging Science Experiment camera on October 3, 2006. What’s amazing is that the orbiter was 185.6 miles above Opportunity when it snapped the shot! At that distance, the image scale is 12 inches per pixel, so objects about 35 inches across are resolved. The image was taken at 3:30 p.m. local Mars time, and the Sun was about 30 degrees above the horizon.
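The jump from “12 inches per pixel” to “objects about 35 inches across” follows a common imaging rule of thumb: a feature needs to span roughly three pixels before it can be recognized. (The three-pixel rule is our gloss, not something stated by NASA.)

```python
# At 12 inches per pixel, how small an object can the camera resolve?
scale_in_per_px = 12   # image scale reported for the orbiter's camera
pixels_needed = 3      # rule-of-thumb pixels to recognize a feature (assumption)

smallest_object_in = scale_in_per_px * pixels_needed
print(f"smallest resolvable object: about {smallest_object_in} inches across")
```

That gives 36 inches — right in line with the article’s 35-inch figure.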

When the shot is viewed at the highest resolution, you can see Opportunity’s wheel tracks in the soil behind it, and the rover’s shadow — including the shadow of the camera mast!

Sponge Job

More than 550 million years ago, thousands of microscopic animal embryos drifted into sea water laced with sulfur compounds. These sulfides killed the animals, but they also helped to preserve them as fossils. Discovered in South China in 1998, these rare fossils show exquisite detail — specifically, various stages of successive cell division.

Now, Whitey Hagadorn of Amherst College (MA) and a team of 15 scientists from five countries have used new computer-processed X-ray images to study 162 of the most well-preserved specimens. As reported in the journal Science, the researchers digitally extracted individual cells from the embryos and then looked inside the cells. What they found were some kidney-shaped objects, which could be nuclei or other subcellular structures. They also found what appear to be cells caught in the processes of dividing. “It is amazing that such delicate biological structures can be preserved in such an ancient deposit,” says team member Shuhai Xiao, associate professor of geosciences at Virginia Tech.

If the cell division process in these samples is real, it shows that primitive embryos had already evolved the style of cell division used by modern embryos. But two key features present in modern embryos are absent in these specimens, leading the researchers to conclude that the embryos came from animals more primitive than any living today. “All the available evidence,” Xiao says, “suggests that they represent relatively simple forms, akin to sponge ancestors.”

These amazing fossils are providing us with a unique insight into primitive life from over half a billion years ago!

Intelligent Mini-Bots Fly

Swarms of intelligent unmanned aerial vehicles (UAVs) may soon be helping the military in dangerous missions. UAVs already serve the military as “eyes in the sky” for battalion commanders planning maneuvers. But these hand-launched vehicles typically require a team of trained operators on the ground to help individual crafts perform short-term missions. Now researchers at MIT (Cambridge, MA) and Boeing Corporation (Seattle, WA) are testing an intelligent airborne fleet that requires little human supervision.

In a recent MIT news release, Jonathan How of MIT’s Aeronautics and Astronautics Department says that UAV swarms will not only be intelligent (for instance, they can anticipate when they need refueling) but also be self-sufficient (they can refuel themselves at an automatic docking station). Right now, the test swarms consist of five miniature helicopters, each with four whirling blades instead of one. Each UAV is a little smaller than a sea gull, is inexpensive, and can be easily repaired or replaced.

What’s most attractive about the new vehicles is that a single operator can use a PC to command the entire system or to fly multiple UAVs simultaneously. In fact, it’s a total “hands-off” experience; the swarms are fully autonomous, meaning that software pilots the vehicle from takeoff to landing.

A fleet of UAVs could one day help the U.S. military and security agencies in difficult, often dangerous, missions such as round-the-clock surveillance, search-and-rescue operations, sniper detection, convoy protection, and border patrol. Such missions depend on keeping vehicles in the air. “The focus of this project is on persistence,” says How. Persistence requires self-sufficiency. “You don’t want 40 people on the ground operating 10 vehicles. The ultimate goal is to avoid a flight operator altogether,” he says.

Watch Out, Energizer Bunny!

Speaking of cutting-edge research going on at MIT, get this: Alan Epstein (Aeronautics and Astronautics Department) and his colleagues are working on putting a tiny, gas-turbine engine inside a silicon chip about the size of a quarter. The result will be a microelectromechanical (MEM) device that can live 10 times longer than a battery of the same weight.

The micro-engine is made of six silicon wafers, piled up like pancakes and bonded together. Each wafer is a single crystal with its atoms perfectly aligned, so it is extremely strong. To achieve the necessary components, the wafers are individually prepared, using an advanced etching process to eat away selected material. When the wafers are piled up, the surfaces and the spaces in between produce the needed features and functions.

The MIT team has now used this process to make all the components needed for their engine, and each part works. Inside a tiny combustion chamber, fuel and air quickly mix and burn at the melting point of steel. Turbine blades, made of high-strength micro-fabricated materials, spin at 20,000 revolutions per second — 100 times faster than those in jet engines! A mini-generator produces 10 watts of power. A little compressor raises the pressure of air in preparation for combustion. And cooling appears manageable by sending the compression air around the outside of the combustor.
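A couple of the numbers in that paragraph can be sanity-checked. The “100 times faster” comparison implies a jet-engine turbine spins at about 200 revolutions per second, and with an assumed rotor size (the MIT team doesn’t give one here) you can estimate the blade-tip speed:

```python
import math

# Sanity check on the micro-turbine figures.
micro_rev_per_s = 20_000
jet_rev_per_s = micro_rev_per_s / 100   # "100 times faster than ... jet engines"

# Assumption: a ~2 mm rotor radius, plausible inside a quarter-sized chip.
rotor_radius_m = 0.002
tip_speed_m_s = 2 * math.pi * rotor_radius_m * micro_rev_per_s

print(f"implied jet-turbine speed: {jet_rev_per_s:.0f} rev/s")
print(f"estimated blade-tip speed: {tip_speed_m_s:.0f} m/s")
```

Even on a 2-millimeter rotor, the blade tips would be moving at roughly 250 meters per second — which is why the turbine blades need high-strength micro-fabricated materials.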

“So, all the parts work. We’re now trying to get them all to work on the same day on the same lab bench,” Epstein says, admitting that it might take some time for that to happen. But once the engine is working, the new device could be used to power laptops, cell phones, radios, and other electronic devices. “Big gas-turbine engines can power a city,” Epstein says, “but a little one could ‘power’ a person.”

Spacecraft Blows Up!

It’s not what you think. This spacecraft, called Genesis I, is an inflatable one!

Last July, Bigelow Aerospace — a commercial venture with offices throughout the United States — successfully launched a 10-foot-long, 8-foot-wide, inflatable spacecraft into Earth orbit! This test craft (filled with air) is only about one-third the length of the dream craft that the company envisions — one that will perform as a future habitable space station. In fact, if you can pay, you will be able to stay at this Earth-orbiting lab (as long as you’re healthy).

Scoops Photo
Genesis I

Genesis I was launched last July 12 from a site in Yasny, Russia. It achieved orbit flawlessly at an altitude of 342 miles, and its solar panels deployed. “At this point in time, the vehicle is happy and healthy,” said company founder Robert T. Bigelow last summer. He also noted that the temperature inside the craft was a comfortable 79 degrees Fahrenheit.

Genesis I is expected to remain in orbit for two to five years before it loses momentum and burns up in the atmosphere. But there’s more hot air to rise. The company plans to send 10 inflatable test craft into orbit before it launches its space station in 2012. To keep track of developments, check Bigelow Aerospace’s Web site.

“Mirroring” the Future

Mirrors that can provide a glimpse into the future are a popular theme in stories and folklore. But now a global consulting firm called Accenture is developing a high-tech mirror that really will let a gazer see into the future. The product’s purpose is to help people lead healthier lives.

Technically, the “future mirror” isn’t really a mirror. It’s a large LCD (Liquid Crystal Display) monitor that’s linked to a computer. Web cameras in the system take your picture, then display it (like a reflection) on the screen. More cameras and sensors around your house monitor your daily habits. For example, the system might log how many hours you exercise on your home treadmill — or sit watching TV. It might track what foods you take from the refrigerator (yogurt — or super-rich ice cream) and how much sleep you get each night. Other lifestyle habits (if an adult smokes, drinks too much, etc.) are also recorded. The system then computes the long-term health effects of a person’s lifestyle. (For example, dermatologists know that smoking reduces blood flow, a condition that can cause premature wrinkling of the skin, while exercise keeps you trim and improves circulation.)

Now for the magic. Using appearance-progression software, the system morphs the original picture taken by the Web camera and forecasts how your good (and bad) habits will modify your appearance in five or 10 years. For example, the mirror might bulk up your face if you’re likely to gain weight, or show wrinkles around your mouth and eyes. Or it might change your complexion (adding blotches, etc.).
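To make the idea concrete, here is a toy sketch of the kind of habit-scoring step such a system might perform before morphing the picture. Everything here — the function, the weights, the inputs — is our invention for illustration, not Accenture’s actual software:

```python
# Toy sketch (our invention, not Accenture's): turn logged habits into a
# crude "aging factor" that a morphing step could use.
def aging_factor(exercise_hours_per_week, tv_hours_per_day, sleep_hours):
    """Return a multiplier; values above 1.0 mean 'age the picture faster'."""
    factor = 1.0
    factor -= 0.02 * exercise_hours_per_week   # exercise helps
    factor += 0.01 * tv_hours_per_day          # couch time hurts
    if sleep_hours < 7:
        factor += 0.05                         # too little sleep hurts
    return max(factor, 0.5)                    # keep the factor sensible

# A couch potato who skimps on sleep ages faster than average (> 1.0):
print(aging_factor(exercise_hours_per_week=3, tv_hours_per_day=4, sleep_hours=6))
```

A real appearance-progression system would draw on medical models far more sophisticated than these made-up weights, but the idea is the same: logged behavior in, a forecast of your future face out.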

“Technology can be quite persuasive,” Accenture laboratory director Martin Illsey recently told New Scientist magazine. In fact, an emerging science called captology actually studies how computers can help people modify their behaviors.

A prototype of Accenture’s “future mirror” also is being tested as a tool to help patients imagine more positive body images. For example, overweight patients might be encouraged to stick to their diets if they could see a sneak peek of how much better they would look when slimmer.

Of course, there could be a downside. In Harry Potter, Harry became obsessed with gazing into a magical mirror. Could a real “future mirror” create vain, self-centered people? Could it make them focus too much on their appearance instead of on more important qualities? Do even ordinary mirrors sometimes do that in today’s image-conscious culture?

You decide. Write to us with your “reflections” on the future mirror. Will it make a positive impact on people’s lives or not? Send your response to “Reflections,” ODYSSEY, 30 Grove St., Suite C, Peterborough, NH 03458. We’ll publish some of your responses in a future issue.

Cell Phones: How Exciting!

Hold an activated cell phone up to your ear and guess what happens? Well, if it’s the kind that uses the Global System for Mobile communications (GSM) standard — which emits electromagnetic fields — part of your brain’s cortex, the part nearest to the phone, gets excited.

Is that good? No one knows. Not even Italian physician Paolo Rossini of Fatebenefratelli Hospital in Milan. Rossini and his colleagues used Transcranial Magnetic Stimulation, or TMS, to check brain function while people used GSM 900 cell phones. Their study, published in the Annals of Neurology, revealed that the motor cortex became excited in 12 out of 15 young male volunteers who used the phone for 45 minutes. “Became excited” means that cells in that part of the brain became more easily triggered — an effect that lasted for up to an hour after hang-up.

While the results do not suggest that using a cell phone is bad for the normal brain, they do put up a warning flag for people with conditions such as epilepsy, which is linked to brain-cell excitability. Now, here’s something with a familiar ring to it: Further studies are needed to determine what, if any, ill effects cell phones have on their users.
