Archive for the ‘University Tech’ Category
Friday, May 3rd, 2013
Dragonfly eyes inspire wide-field camera lens.
By mimicking the bulging, bowl-shaped eyes possessed by dragonflies, praying mantises, houseflies and other insects, a team of researchers that includes a University of Colorado Boulder engineer has built an experimental digital camera that can take exceptionally wide-angle photos without distorting the image.
To create the innovative camera, which also allows for a practically infinite depth of field, the scientists used stretchable electronics and a pliable sheet of microlenses made from a material similar to that used for contact lenses. The researchers described the camera in an article published today in the journal Nature.
Conventional wide-angle lenses, such as fisheyes, distort the images they capture at the periphery, a consequence of the mismatch between the hemispherically curved surface of the lens and the flat surface of the electronic detector that captures the light.
For the digital camera described in the new study, the researchers were able to create an electronic detector that can be curved into the same hemispherical shape as the lens, eliminating the distortion.
“The most important and most revolutionizing part of this camera is to bend electronics onto a curved surface,” said Jianliang Xiao, assistant professor of mechanical engineering at CU-Boulder and co-lead author of the study. “Electronics are all made of silicon, mostly, and silicon is very brittle, so you can’t deform the silicon. Here, by using stretchable electronics we can deform the system; we can put it onto a curved surface.”
Long sought goal
Creating a camera inspired by the compound eyes of arthropods — animals with exoskeletons and jointed legs, including all insects as well as scorpions, spiders, lobsters and centipedes, among other creatures — has been a sought-after goal.
Compound eyes typically have a lower resolution than the eyes of mammals, but they give arthropods a much larger field of view than mammalian eyes as well as high sensitivity to motion and an infinite depth of field.
Compound eyes consist of a collection of smaller eyes called ommatidia, and each small eye is made up of an independent corneal lens as well as a crystalline cone, which captures the light traveling through the lens. The number of ommatidia determines the resolution and varies widely among arthropods. Dragonflies, for example, have about 28,000 tiny eyes, while worker ants have only around 100.
Imitating the corneal lens-crystalline cone pairings, the camera created by Xiao and his colleagues has 180 miniature lenses, each of which is backed with its own small electronic detector. The number of lenses used in the camera is similar to the number of ommatidia in the compound eyes of fire ants and bark beetles.
Can use conventional manufacturing systems
The electronics and the lenses are both flat when fabricated, said Xiao, who began working on the project as a postdoctoral researcher in John Rogers’ lab at the University of Illinois at Urbana-Champaign. This allows the product to be manufactured using conventional systems.
“This is the key to our technology,” Xiao said. “We can fabricate an electronic system that is compatible with current technology. Then we can scale it up.”
The lens sheet and the electronics sheet are integrated together while flat and then molded into a hemispherical shape afterward. The individual electronic detectors and lenses do not deform; instead, the spaces between them stretch, allowing the sheet to take on its new 3-D shape. The detectors are all connected by serpentine filament bridges, which are not compromised as the material stretches and bends.
In the pictures taken by the new camera, each lens-detector pairing contributes a single pixel to the image. Moving the electronic detectors directly behind the lenses — instead of having just one detector sitting farther behind a single lens, as in conventional cameras — creates a very short focal length, which allows for the near-infinite depth of field.
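The near-infinite depth of field follows from standard optics: the hyperfocal distance, beyond which everything is acceptably sharp, scales with the square of the focal length, so a tiny microlens sitting right on its detector keeps virtually the whole scene in focus. The sketch below uses the textbook formula with invented focal lengths, f-number and circle of confusion; the paper does not publish these parameters.

```python
# Hyperfocal distance H ~ f^2 / (N * c) + f: everything beyond H/2 is in
# acceptable focus. All numbers are illustrative assumptions, not the study's.
def hyperfocal_mm(focal_mm, f_number, circle_of_confusion_mm):
    return focal_mm ** 2 / (f_number * circle_of_confusion_mm) + focal_mm

# A conventional 50 mm lens at f/2.8 with a 0.03 mm circle of confusion:
conventional = hyperfocal_mm(50, 2.8, 0.03)   # ~29.8 m
# A hypothetical sub-millimeter microlens (0.5 mm focal length), same f-number:
microlens = hyperfocal_mm(0.5, 2.8, 0.03)     # ~3.5 mm

print(f"conventional lens: sharp beyond ~{conventional / 2 / 1000:.1f} m")
print(f"microlens: sharp beyond ~{microlens / 2:.1f} mm")
```

With a half-millimeter focal length, the "in focus" zone starts a couple of millimeters from the lens, which is effectively infinite depth of field for any real scene.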
The new paper demonstrates that stretchable electronics can be used as the foundation for a distortion-free hemispherical camera, but commercial production of such a camera may still be years away, Xiao said.
Tuesday, March 5th, 2013
Paul Bolls is an associate professor of strategic communication in the MU School of Journalism and a 2012 Fellow at the Reynolds Journalism Institute at MU.
As newspaper sales continue to decline, many news organizations are searching for ways to improve readership and revenues from their online presences.
Now, University of Missouri researchers have found that news organizations should target readers with certain personality traits in order to optimize their online viewership. Paul Bolls, an associate professor of strategic communication at the MU School of Journalism and a 2011-2012 MU Reynolds Journalism Institute Fellow, has found that news consumers who have “reward-seeking” personalities are more likely to read their news online and on mobile devices, and to engage with websites, by leaving comments on stories and uploading user-generated content.
In a study accepted for presentation at the 2013 International Communication Association conference in June, Bolls surveyed more than 1000 respondents and placed them into two personality groups: reward seekers and threat avoiders.
Reward seekers more active online
He found that reward seekers tend to use the Internet liberally, searching out entertainment and gratification, while threat avoiders tend to be more conservative, looking only for information that directly affects them.
Bolls found that respondents identified as reward seekers were much more likely to engage with news websites as well as more likely to use mobile devices such as smartphones and tablets to consume news. He says this knowledge should direct news organizations to target these reward seekers.
“While threat avoiders may passively view news online from time to time, reward seekers are much more likely to visit news websites and, once they are there, stay there for longer periods of time,” Bolls said.
Use “brain friendly” designs
“In order to maximize the amount of revenue they can earn online, news organizations should find ways to specifically target reward seekers and engage them with their websites. If news organizations can keep reward seekers on their sites and mobile apps, we have shown that they will willingly view many different pages, which will boost advertising revenue.”
Bolls also recommends that news organizations use “brain friendly” designs when building their websites. He says that the brain is engaged through motivation, so the most effective way to get readers to visit and stay on a website is to give them proper motivation, such as invoking emotion with stories and pictures. He also says that the simpler the design, the better.
“The brain can only process so much information at a time,” Bolls said. “Too much information can overload it and cancel out understanding and retention. Consuming news and advertising involves receiving information, adding previously held knowledge for context, and then storage of the new information.
“These steps need to be in balance. If a reader has to work too hard to find the stories they are looking for on a news site, it can defeat their brain’s ability to add context and store the new information for the future. Keeping it simple is key.”
Friday, February 22nd, 2013
Some research is off the wall, some off the charts and some off the planet, such as what a Texas A&M University aerospace and physics professor is exploring.
It’s a plan to deflect a killer asteroid by using paint, and the science behind it is absolutely rock solid, so to speak, so much so that NASA is getting involved and wants to know much more.
We don’t run a lot of space-science stories on the TechJournal, but we thought this one might interest our tech-focused audience, particularly since a rather large space rock just caused havoc in Russia and another recently passed closer to Earth than some of the artificial satellites orbiting it.
Dave Hyland, professor of physics and astronomy, a faculty member in the aerospace engineering department at Texas A&M and a researcher with more than 30 years of awards and notable grants, says one possible way to avert an asteroid collision with Earth is a high-pressure process called “tribocharged powder dispensing”: spreading a thin layer of paint on an approaching asteroid, such as the one named DA14 that came within 17,000 miles on Feb. 15.
Not your standard hardware store paint
What happens, Hyland theorizes, is that the paint changes the amount of sunlight the asteroid reflects, producing a change in what is called the Yarkovsky effect (discovered by a Russian engineer in 1902).
The force arises because on a spinning asteroid, the dusk side is warmer than the dawn side and emits more thermal photons, each photon carrying a small momentum. The unequal heating of the asteroid results in a net force strong enough to cause the asteroid to shift from its current orbit, Hyland further theorizes.
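The mechanism can be put in rough numbers: each thermal photon of energy E carries momentum E/c, so the power imbalance between the warm dusk side and the cool dawn side divided by the speed of light gives an order-of-magnitude thrust. The asteroid size, temperatures and emissivity below are illustrative assumptions, not figures from the article.

```python
# Order-of-magnitude sketch of the Yarkovsky force: the warmer dusk side
# radiates more thermal power, and radiated power P exerts a recoil force P/c.
# All numbers are illustrative assumptions, not values from the article.
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
C = 3.0e8         # speed of light, m s^-1

def emitted_power(temp_k, area_m2, emissivity=0.9):
    """Thermal power radiated by one hemisphere (Stefan-Boltzmann law)."""
    return emissivity * SIGMA * area_m2 * temp_k ** 4

# Assume a ~45-m asteroid (DA14-sized), dusk side at 260 K, dawn side at 240 K.
radius = 22.5
hemisphere_area = 2 * 3.14159 * radius ** 2
net_power = emitted_power(260, hemisphere_area) - emitted_power(240, hemisphere_area)
net_force = net_power / C   # net thrust in newtons

print(f"net Yarkovsky-style thrust: ~{net_force:.4f} N")
```

The force comes out below a millinewton, which is why the effect only matters when it acts continuously over years of orbital motion, and why even a subtle change in reflectivity from a paint coating could, in principle, be enough to nudge the orbit.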
The kind of paint used is not the kind found at your local hardware store, Hyland explains.
“It could not be a water-based or oil-based paint because it would probably explode within seconds of it entering space,” he notes.
Test it in space
“But a powdered form of paint could be used to dust on the asteroid and the sun would then do the rest. It cures the paint to give a smooth coating, and would change the unequal heating of the asteroid so that it would be forced off its current path and placed on either a higher or lower orbit, thus missing Earth.
“I have to admit the concept does sound strange, but the odds are very high that such a plan would be successful and would be relatively inexpensive. The science behind the theory is sound. We need to test it in space.”
As for getting the paint onto the asteroid, a practical delivery method was developed by a former student of Hyland’s, Shen Ge, who has since started a new space company.
The “tribocharging powder dispenser” would spray a mixture of inert gas and charged dry-paint powder at the asteroid that would attract the powder to its surface through electrostatics. Then solar wind and UV radiation would cure the powder, giving a smooth, thin coat on the surface.
Getting the paint in the asteroid’s path in a timely manner will certainly be a challenge, Hyland observes.
“The tribocharged powder process is a widely used method of painting many products,” he says. “It remains only to adapt the technology to space conditions.”
NASA has approached Hyland about developing such a project to test the theory, and the Earth may need it soon. An asteroid called Apophis will fly by on April 13, 2029 (a Friday the 13th), passing closer than many communications satellites now in orbit, and will make a return trip in 2036. It is estimated to be more than 1,000 feet in length and is appropriately named for an Egyptian god of chaos and destruction. There is no chance of it hitting Earth in 2029, but there is a small chance on the next close approach in 2036, Hyland notes.
Earth hit before
Asteroids have hit Earth before. One hit off the Yucatan coast of Mexico about 65 million years ago and is believed to have caused the eventual extinction of the dinosaurs.
And in 1908, the fabled “Tunguska event” occurred in Siberia in which an asteroid or meteor exploded several miles above the Earth, flattening trees and killing livestock over 800 square miles. The explosion is now estimated to have been 1,000 times more powerful than the A-bomb dropped on Hiroshima.
“There are thousands of asteroids out there, and only a small percentage of them are known and can be tracked as they approach Earth,” Hyland adds.
“The smaller ones, like DA14, are not discovered as soon as others, and they could still cause a lot of damage should they hit Earth. It is really important for our long-term survival that we concentrate much more effort on discovering and tracking them, and on developing as many useful technologies as possible for deflecting them.”
Tuesday, January 8th, 2013
This image shows the misfit scales found on the lantern of the Photuris firefly. Researchers found that the sharp edges of the scales let out the most light. Credit: Optics Express.
The nighttime twinkling of fireflies has inspired scientists to modify a light-emitting diode (LED) so it is more than one and a half times as efficient as the original.
Researchers from Belgium, France, and Canada studied the internal structure of firefly lanterns, the organs on the bioluminescent insects’ abdomens that flash to attract mates.
The scientists identified an unexpected pattern of jagged scales that enhanced the lanterns’ glow, and applied that knowledge to LED design to create an LED overlayer that mimicked the natural structure.
The overlayer, which increased LED light extraction by up to 55 percent, could be easily tailored to existing diode designs to help humans light up the night while using less energy. The work is published in a pair of papers today in the Optical Society’s open-access journal Optics Express.
Learning from nature
“The most important aspect of this work is that it shows how much we can learn by carefully observing nature,” says Annick Bay, a Ph.D. student at the University of Namur in Belgium who studies natural photonic structures, including beetle scales and butterfly wings.
Fireflies create light through a chemical reaction that takes place in specialized cells called photocytes.
The light is emitted through a part of the insect’s exoskeleton called the cuticle. Light travels through the cuticle more slowly than it travels through air, and the mismatch means a proportion of the light is reflected back into the lantern, dimming the glow.
The unique surface geometry of some fireflies’ cuticles, however, can help minimize internal reflections, meaning more light escapes to reach the eyes of potential firefly suitors.
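The index mismatch described above sets a critical angle: light striking the cuticle-air boundary more steeply than this is totally internally reflected and never escapes. The refractive index below is a typical insect-cuticle value assumed for illustration; the papers measure the real structures.

```python
import math

# Critical angle for total internal reflection at a cuticle-air boundary.
# n = 1.56 is an assumed, typical insect-cuticle refractive index.
n_cuticle = 1.56
critical_angle = math.degrees(math.asin(1.0 / n_cuticle))
print(f"critical angle: ~{critical_angle:.0f} degrees")  # ~40 degrees
```

Only light within roughly a 40-degree cone of the surface normal escapes a flat boundary, which is why surface texturing that scrambles the exit angles, like the jagged scales, lets more light out.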
How the features can enhance LED design
In the Optics Express papers, Bay, her co-author Vigneron and colleagues first describe the intricate structures they saw when they examined firefly lanterns and then present how the same features could enhance LED design. Using scanning electron microscopes, the researchers identified structures such as nanoscale ribs and larger misfit scales on the fireflies’ cuticles.
When the researchers used computer simulations to model how the structures affected light transmission they found that the sharp edges of the jagged, misfit scales let out the most light. The finding was confirmed experimentally when the researchers observed the edges glowing the brightest when the cuticle was illuminated from below.
“We refer to the edge structures as having a factory roof shape,” says Bay. “The tips of the scales protrude and have a tilted slope, like a factory roof.”
The protrusions repeat approximately every 10 micrometers, with a height of approximately 3 micrometers. “In the beginning we thought smaller nanoscale structures would be most important, but surprisingly in the end we found the structure that was the most effective in improving light extraction was this big-scale structure,” says Bay.
Human-made light-emitting devices like LEDs face the same internal-reflection problem as fireflies’ lanterns, and Bay and her colleagues thought a factory-roof-shaped coating could make LEDs brighter.
Thursday, November 15th, 2012
Researchers say they can boost the speed of public WiFi networks by up to 700 percent.
Have you struggled to log on to crowded WiFi hotspots or found the connection cripplingly slow? Researchers at North Carolina State University say they have a solution.
As many WiFi users know, WiFi performance is often poor in crowded areas such as airports, coffee shops and tech events.
WiFox speeds traffic by up to 700 percent
But researchers at NC State University have developed a new software program, called WiFox, which can be incorporated into existing networks and expedites data traffic in large audience WiFi environments – improving data throughput by up to 700 percent.
WiFi traffic gets slowed down in high-population environments because computer users and the WiFi access point they are connected to have to send data back and forth via a single channel.
If a large number of users are submitting data requests on that channel, it is more difficult for the access point to send them back the data they requested.
Similarly, if the access point is permanently given a high priority – enabling it to override user requests in order to send out its data – users would have trouble submitting their data requests. Either way, things slow down when there is a data traffic jam on the shared channel.
Now NC State researchers have created WiFox, which monitors the amount of traffic on a WiFi channel and grants an access point priority to send its data when it detects that the access point is developing a backlog of data.
The amount of priority the access point is given depends on the size of the backlog – the longer the backlog, the higher the priority. In effect, the program acts like a traffic cop, keeping the data traffic moving smoothly in both directions.
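The backlog-proportional priority can be sketched in a few lines. The class name, queue-to-priority mapping and priority cap below are invented for illustration; the article does not publish WiFox's actual algorithm.

```python
from collections import deque

# Hypothetical simplification of the mechanism described above: monitor the
# access point's downlink backlog and raise its channel-access priority as
# the backlog grows, up to a cap.
class AccessPoint:
    def __init__(self, max_priority=4):
        self.backlog = deque()
        self.max_priority = max_priority

    def enqueue(self, frame):
        self.backlog.append(frame)

    def priority(self):
        """Longer backlog -> higher channel-access priority (capped)."""
        return min(len(self.backlog) // 10 + 1, self.max_priority)

ap = AccessPoint()
for i in range(35):           # 35 queued responses build up
    ap.enqueue(f"frame-{i}")
print(ap.priority())          # backlog of 35 -> priority 4
```

An idle access point stays at the lowest priority, so user uplink requests are not starved; priority rises only while a backlog actually exists, which is the "traffic cop" behavior described above.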
The more users, the better it performs
The research team tested the program on a real WiFi system in their lab, which can handle up to 45 users. They found that the more users on the system, the more the new program improved data throughput performance. Improvements ranged from 400 percent with approximately 25 users to 700 percent when there were around 45 users.
This translates to the WiFi system being able to respond to user requests an average of four times faster than a WiFi network that does not use WiFox.
“One of the nice things about this mechanism is that it can be packaged as a software update that can be incorporated into existing WiFi networks,” says Arpit Gupta, a Ph.D. student in computer science at NC State and lead author of a paper describing the work. “WiFox can be incorporated without overhauling a system.”
Thursday, October 11th, 2012
Companies and marketers now have access to a seemingly endless array of data on consumers’ opinions and experiences via blogs, online forums and product reviews.
In principle, businesses should be able to use this information to gain a better understanding of the general market and of their own and their competitors’ customers. But this wealth of consumer-generated content can be both a blessing and a curse.
A new approach, described in a study by Oded Netzer, the Philip H. Geier Jr. Associate Professor at Columbia Business School, offers a way to efficiently aggregate and analyze this content.
The study, co-authored with Ronen Feldman and Moshe Fresko of Hebrew University, and Jacob Goldenberg, visiting professor at Columbia Business School and Professor of Marketing at the Hebrew University of Jerusalem, shows how text mining—the process of extracting useful information from unstructured text—combined with network-analysis tools can help businesses leverage the web as a marketing research playground, generating meaningful insights on market structure and the competitive landscape without asking consumers a single question.
New text-mining tool proved effective
The researchers developed a text-mining tool specifically designed for the complexity of consumer forums, as well as a method of converting this information into quantifiable perceptual associations and similarities between brands.
Companies can use this method to monitor their market positions over time—with greater detail and at a lower cost than through traditional methods based on sales and survey data.
The method proved successful in empirical tests, including one focused on consumer forums about sedan cars. In the first test, the researchers downloaded data from the sedan forum Edmunds.com, and text mined more than 860,000 consumer messages, consisting of close to six million sentences posted by about 76,000 unique consumers between 2001 and 2007.
Using their combination of text mining and network-analysis techniques, they created visual maps of consumer perceptions and discussions about 169 different sedan cars.
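One simple way to turn forum text into this kind of perceptual map is to treat brands that consumers mention in the same message as similar. The toy messages, brand names and Jaccard measure below are illustrative stand-ins; the study's actual text-mining pipeline is far richer.

```python
from itertools import combinations
from collections import Counter

# Toy co-mention network: brands mentioned together in one message are
# treated as perceptually similar. Messages are invented for illustration.
messages = [
    {"Honda Accord", "Toyota Camry"},
    {"Toyota Camry", "Honda Accord", "Nissan Altima"},
    {"Cadillac CTS", "BMW 3 Series"},
    {"Cadillac CTS", "BMW 3 Series", "Audi A4"},
]

mentions = Counter()
co_mentions = Counter()
for brands in messages:
    mentions.update(brands)
    co_mentions.update(frozenset(pair) for pair in combinations(sorted(brands), 2))

def jaccard(a, b):
    """Similarity = co-mentions / (messages mentioning either brand)."""
    both = co_mentions[frozenset((a, b))]
    return both / (mentions[a] + mentions[b] - both)

print(jaccard("Honda Accord", "Toyota Camry"))   # 1.0 -- always mentioned together
print(jaccard("Honda Accord", "Cadillac CTS"))   # 0.0 -- never co-mentioned
```

Feeding the pairwise similarities into standard network-layout or clustering tools then yields a visual map in which competing brands sit close together.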
The maps can be used to evaluate the competitive market structure, and assess the effectiveness of marketing campaigns.
The study demonstrates how a large-scale marketing campaign by Cadillac, aimed at positioning the brand as a stronger rival to imported luxury cars, did indeed shift consumers’ top-of-mind associations with Cadillac.
This new method can also be used to analyze more structured textual data such as blogs, reviews, and articles to explore consumer perceptions and opinions.
Monday, October 1st, 2012
The UT^2 bot faces off against an opponent in the BotPrize.
An artificially intelligent virtual gamer created by computer scientists at The University of Texas at Austin has won the BotPrize by convincing a panel of judges that it was more human-like than half the humans it competed against.
The competition was sponsored by 2K Games and was set inside the virtual world of “Unreal Tournament 2004,” a first-person shooter video game.
“The idea is to evaluate how we can make game bots, which are nonplayer characters (NPCs) controlled by AI algorithms, appear as human as possible,” said Risto Miikkulainen, professor of computer science in the College of Natural Sciences. Miikkulainen created the bot, called the UT^2 game bot, with doctoral students Jacob Schrum and Igor Karpov.
Bots face off in tournament play
The bots face off in a tournament against one another and about an equal number of humans, with each player trying to score points by eliminating its opponents. Each player also has a “judging gun” in addition to its usual complement of weapons. That gun is used to tag opponents as human or bot.
The bot that is scored as most human-like by the human judges is named the winner. UT^2, which won a warm-up competition last month, shared the honors with MirrorBot, which was programmed by Romanian computer scientist Mihai Polceanu.
Winning bots did better than humans
The winning bots both achieved a humanness rating of 52 percent. Human players received an average humanness rating of only 40 percent. The two winning teams will split the $7,000 first prize.
The victory comes 100 years after the birth of mathematician and computer scientist Alan Turing, whose “Turing test” stands as one of the foundational definitions of what constitutes true machine intelligence. Turing argued that we will never be able to see inside a machine’s hypothetical consciousness, so the best measure of machine sentience is whether it can fool us into believing it is human.
“When this ‘Turing test for game bots’ competition was started, the goal was 50 percent humanness,” said Miikkulainen. “It took us five years to get there, but that level was finally reached last week, and it’s not a fluke.”
Bots mimic humans in several ways
The complex gameplay and 3-D environments of “Unreal Tournament 2004” require that bots mimic humans in a number of ways, including moving around in 3-D space, engaging in chaotic combat against multiple opponents and reasoning about the best strategy at any given point in the game. Even displays of distinctively human irrational behavior can, in some cases, be emulated.
“People tend to tenaciously pursue specific opponents without regard for optimality,” said Schrum. “When humans have a grudge, they’ll chase after an enemy even when it’s not in their interests. We can mimic that behavior.”
In order to most convincingly mimic as much of the range of human behavior as possible, the team takes a two-pronged approach.
Some behavior is modeled directly on previously observed human behavior, while the central battle behaviors are developed through a process called neuroevolution, which runs artificially intelligent neural networks through a survival-of-the-fittest gauntlet that is modeled on the biological process of evolution.
Defining “human-like” a challenge
“In the case of the BotPrize,” said Schrum, “a great deal of the challenge is in defining what ‘human-like’ is, and then setting constraints upon the neural networks so that they evolve toward that behavior.
“If we just set the goal as eliminating one’s enemies, a bot will evolve toward having perfect aim, which is not very human-like. So we impose constraints on the bot’s aim, such that rapid movements and long distances decrease accuracy.
By evolving for good performance under such behavioral constraints, the bot’s skill is optimized within human limitations, resulting in behavior that is good but still human-like.”
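The aim constraint Schrum describes can be modeled as a hit probability that decays with target distance and the bot's own turn speed, so perfect marksmanship is unreachable no matter how the network evolves. The functional form and constants below are illustrative assumptions, not the team's actual model.

```python
import math
import random

# Hypothetical aim constraint: hit probability decays with distance and with
# how fast the bot is turning, so evolution cannot produce perfect aim.
def hit_probability(distance_m, turn_speed_deg_s,
                    base=0.95, dist_scale=40.0, turn_scale=180.0):
    penalty = math.exp(-distance_m / dist_scale) * math.exp(-turn_speed_deg_s / turn_scale)
    return base * penalty

def shoot(distance_m, turn_speed_deg_s, rng=random.Random(0)):
    """Roll the dice: a shot lands only with the constrained probability."""
    return rng.random() < hit_probability(distance_m, turn_speed_deg_s)

# A close, steady shot should always beat a distant snap shot:
print(hit_probability(5, 10) > hit_probability(60, 240))   # True
```

Because fitness is evaluated through this noisy, human-limited channel, the networks that win the evolutionary gauntlet are those that pick good positions and targets, not those with inhumanly fast reflexes.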
Miikkulainen said that methods developed for the BotPrize competition should eventually be useful not just in developing games that are more entertaining, but also in creating virtual training environments that are more realistic, and even in building robots that interact with humans in more pleasant and effective ways.
Monday, August 20th, 2012
The experimental setup of a proposed glasses-free 3-D theater experience is shown, with the projector in the familiar front position, creating 3-D images. Credit: Optics Express.
From the early days of cinema, film producers have used various techniques to create the illusion of depth – with mixed results.
In the early 1950s, people complained that wearing the glasses needed to see 3D effects in films such as “Creature from the Black Lagoon” gave them headaches.
But even with digital technology, the latest Hollywood blockbusters still rely on clunky glasses to achieve a convincing 3-D effect.
And just about everyone we know hates those glasses.
New optics research by a team of South Korean investigators offers the prospect of glasses-free, 3-D display technology for commercial theaters.
Their new technique, described in a paper published today in the Optical Society’s (OSA) open-access journal Optics Express, can bring this added dimension while using space more efficiently and at a lower cost than current 3-D projection technology.
Taking the next step
“There has been much progress in the last 10 years in improving the viewers’ experience with 3-D,” notes the team’s lead researcher Byoungho Lee, professor at the School of Electrical Engineering, Seoul National University in South Korea.
“We want to take it to the next step with a method that, if validated by further research, might constitute a simple, compact, and cost-effective approach to producing widely available 3-D cinema, while also eliminating the need for wearing polarizing glasses.”
Polarization is one of the fundamental properties of light; it describes how light waves vibrate in a particular direction—up and down, side-to-side, or anywhere in between. Sunlight, for example, vibrates in many directions.
To create modern 3-D effects, movie theaters use linearly or circularly polarized light. In this technique, two projectors display two similar images, which are slightly offset, simultaneously on a single screen.
Each projector allows only one state of polarized light to pass through its lens. By donning the familiar polarized glasses, each eye perceives only one of the offset images, creating the depth cues that the brain interprets as three dimensions.
The two-projector method, however, is cumbersome, so optical engineers have developed various single-projector methods to achieve similar effects. The parallax barrier method, for example, succeeds in creating the illusion of 3-D, but it, too, is unwieldy, as it requires a combination of rear-projection video and physical barriers or optics between the screen and the viewer.
Think of these obstructions as the slats in a venetian blind, which create a 3-D effect by limiting the image each eye sees. The South Korean team has developed a new way to achieve the same glasses-free experience while using a single front projector against a screen.
In their system, the Venetian blinds’ “slat” effect is achieved by using polarizers, which stop the passage of light after it reflects off the screen.
To block the necessary portion of light, the researchers added a specialized coating to the screen known as a quarter-wave retarding film. This film changes the polarization state of light so it can no longer pass through the polarizers.
As the light passes back either through or between the polarizing slats, the offset effect is created, producing the depth cues that give the viewer a convincing 3-D effect without the need for glasses.
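The blocking mechanism can be checked with a few lines of Jones calculus: light that passes a quarter-wave retarder, reflects off the screen, and passes the retarder again emerges with its linear polarization rotated 90 degrees, so the original polarizer extinguishes it. This is the idealized textbook picture (the reflection itself is treated as an identity), not the paper's actual optical design.

```python
import numpy as np

# Jones-calculus sketch: polarizer -> quarter-wave film -> reflect ->
# quarter-wave film -> same polarizer. Double-passing a quarter-wave
# retarder at 45 degrees acts as a half-wave plate, rotating the
# polarization 90 degrees, so the return beam is blocked.
H_POLARIZER = np.array([[1, 0], [0, 0]], dtype=complex)               # passes horizontal
QWP_45 = np.array([[1, 1j], [1j, 1]], dtype=complex) / np.sqrt(2)     # fast axis at 45 deg

horizontal = np.array([1, 0], dtype=complex)

# Outbound: projector light passes the polarizer, then the quarter-wave film.
outbound = QWP_45 @ (H_POLARIZER @ horizontal)
# Return pass through the film: the two passes together act as a half-wave plate.
returning = QWP_45 @ outbound
# The returning light is now vertically polarized; the polarizer blocks it.
blocked = H_POLARIZER @ returning

print(round(float(np.abs(blocked).sum()), 6))   # 0.0 -- fully extinguished
```

Regions of the screen with the film therefore go dark behind each polarizing slat, while uncoated regions pass light freely, which is exactly the venetian-blind pattern the design needs.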
Can be used in two types of 3-D displays
The team’s experimental results reported today show the method can be used successfully in two types of 3-D displays.
The first is the parallax barrier method, described above, which uses a device placed in front of a screen enabling each eye to see slightly different, offset images. The other projection method is integral imaging, which uses a two-dimensional array of many small lenses or holes to create 3-D effects.
“Our results confirm the feasibility of this approach, and we believe that this proposed method may be useful for developing the next generation of a glasses-free projection-type 3-D display for commercial theaters,” notes Lee.
As a next step in their research, the team hopes to refine the method, and apply it to developing other single-projector, frontal methods of 3-D display, using technologies such as passive polarization-activated lens arrays and the lenticular lens approach.
While their experimental results are promising, it may be several years until this technology can be effectively deployed in your local movie theater for you to enjoy without polarizing glasses.
Paper: “A frontal projection-type three-dimensional display,” Optics Express, Vol. 20, Issue 18, pp. 20130-20138 (2012).
Thursday, August 9th, 2012
Wei Peng, MSU associate professor of telecommunication, information studies and media, says exercise video games, while not perfect, can be helpful in getting some people to be more active.
Active video games, also known as “exergames,” are not the perfect solution to the nation’s sedentary ways, but they can play a role in getting some people to be more active.
Michigan State University’s Wei Peng reviewed published studies of these games and found that most active video games (AVGs) provide only “light-to-moderate” intensity physical activity.
And that, she says, is not nearly as good as what she calls “real-life exercise.”
Could be a good step
“For those not engaging in real-life exercise, this may be a good step toward this,” said Peng, an assistant professor of telecommunication, information studies and media. “Eventually the goal is to help them get somewhat active and maybe move to real-life exercise.”
Of the 41 AVG studies the researchers examined, only three showed the games to be effective tools for increasing physical activity.
“Some people are very enthusiastic about exergames,” Peng said. “They think this will be the perfect solution to solve the problem of sedentary behavior. But it’s not that easy.”
Most game activity light
It’s generally recommended that the average adult get 30 minutes of moderate to vigorous exercise each day. Unfortunately, most of the games that were studied provided only light activity, “so they were not meeting the recommendations,” Peng said.
However, for some populations light-to-moderate activity can sometimes be enough.
“The games do have the potential to be useful,” Peng said, “especially for populations that are more suitable to light-to-moderate activity – seniors, for example.”
Peng said exergames also have proven to be useful when used in structured exercise programs, such as those used for rehabilitation or in senior citizen centers.
Structured program would be better
“Just giving the games to people may not be a good approach,” Peng said. “They may not use it or use it effectively. It’s better if used in a structured program where there are more people participating.”
Peng and colleagues’ findings are detailed in the recent edition of the journal Health Education and Behavior.
Other authors of the paper are Julia Crouse, a doctoral student in the MSU College of Communication Arts and Sciences, and Jih-Hsuan Lin, a faculty member at the National Chiao Tung University in Taiwan.
The research was funded by a grant from the Robert Wood Johnson Foundation’s Pioneer Portfolio through its national program, Health Games Research.
Monday, February 6th, 2012
“Online dating is definitely a new and much needed twist on relationships,” says Harry Reis, a co-author of the study and professor of psychology at the University of Rochester. Photo courtesy of University of Rochester
Online dating has not only shed its stigma, it has surpassed all forms of matchmaking in the United States other than meeting through friends, according to a new analysis of research on the burgeoning relationship industry.
The digital revolution in romance is a boon to lonely-hearters, providing greater and more convenient access to potential partners, reports the team of psychological scientists who prepared the review.
But the industry’s claims to offering a “science-based” approach with sophisticated algorithm-based matching have not been substantiated by independent researchers and, therefore, “should be given little credence,” they conclude.
A new and much needed twist
“Online dating is definitely a new and much needed twist on relationships,” says Harry Reis, one of the five co-authors of the study and professor of psychology at the University of Rochester.
Behavioral economics has shown that the dating market for singles in Western society is grossly inefficient, especially once individuals exit high school or college, he explains.
“The Internet holds great promise for helping adults form healthy and supportive romantic partnerships, and those relationships are one of the best predictors of emotional and physical health,” says Reis.
But online love has its pitfalls, Reis cautions.
Comparing dozens and sometimes hundreds of possible dates may encourage a “shopping” mentality in which people become judgmental and picky, focusing exclusively on a narrow set of criteria like attractiveness or interests. And corresponding by computer for weeks or months before meeting face-to-face has been shown to create unrealistic expectations, he says.
For a mobile take on online dating see: Meet.com wants mobile app to end online dating woes.
The 64-page analysis reviews more than 400 psychology studies and public interest surveys, painting a full and fascinating picture of an industry that, according to one industry estimate, attracted 25 million unique users around the world in April 2011 alone. The report was commissioned by the Association for Psychological Science and will be published in the February edition of its journal Psychological Science in the Public Interest.
Other highlights from the analysis include:
Online dating has become the second-most-common way for couples to meet, behind only meeting through friends. According to research by Michael Rosenfeld from Stanford University and Reuben Thomas from City College of New York, in the early 1990s, less than 1 percent of the population met partners through printed personal advertisements or other commercial intermediaries.
By 2005, among single adult Americans who were Internet users and currently seeking a romantic partner, 37 percent had dated online. By 2007-2009, 22 percent of heterosexual couples and 61 percent of same-sex couples had found their partners through the Web. Those percentages are likely even larger today, the authors write.
Attitudes have changed radically. Through the 1980s and into the 1990s, a stigma was associated with personal advertisements that initially extended to online dating. But today, “online dating has entered the mainstream, and it is fast shedding any lingering social stigma,” the authors write.
Men and women behave differently online.
- A 2010 study of 6,485 users of a major online dating site found that men viewed three times more profiles than women did (597,169 to 196,363).
- Men were approximately 40 percent more likely to initiate contact with a woman after viewing her profile than women were after viewing a man’s profile (12.5 to 9 percent).
Online sites may encourage “soulmate” search. The authors caution that matching sites’ emphasis on finding a perfect match, or soulmate, may encourage an unrealistic and destructive approach to relationships.
“People with strong beliefs in romantic destiny (sometimes called soulmate beliefs) — that a relationship between two people either is or is not ‘meant to be’ — are especially likely to exit a romantic relationship when problems arise … and to become vengeful in response to partner aggression when they feel insecure in the relationship,” the authors write.
Online dating sites are not “scientific”. Despite claims of using a “science-based” approach with sophisticated algorithm-based matching, the authors found “no published, peer-reviewed papers – or Internet postings, for that matter – that explained in sufficient detail … the criteria used by dating sites for matching or for selecting which profiles a user gets to peruse.” Instead, research touted by online sites is conducted in-house with study methods and data collection treated as proprietary secrets, and, therefore, not verifiable by outside parties.
Online dating fundamentally changes access to information. “In the words of one online dater: ‘Where else can you go in a matter of 20 minutes [and] look at 200 women who are single and want to go on dates?’ “
Friday, February 3rd, 2012
Researchers from Carnegie Mellon University have published a new study that refutes three key criticisms of crowdsourcing, a popular tool for new idea generation for firms as they seek to develop new products and services and to improve on their existing offerings in an increasingly competitive marketplace.
The study finds that crowdsourcing is not the misguided fad that some critics have suggested but that the process of crowdsourcing actually — under the right conditions — creates more knowledgeable consumers and, in time, leads to more efficient, lower-cost generation of high potential ideas.
The study, “Crowdsourcing New Product Ideas Under Consumer Learning,” was conducted by Kannan Srinivasan, the Rohet Tolani Distinguished Professor of International Business and the H.J. Heinz II Professor of Management, Marketing and Information Systems at Carnegie Mellon’s Tepper School of Business; Assistant Professor of Information Systems Param Vir Singh, also of the Tepper School; and Yan Huang, a Ph.D. student at the Heinz College at Carnegie Mellon.
The team set out to investigate the three most common criticisms of crowdsourcing: that individuals’ limited view about firms’ products leads to the contribution of mainly niche ideas; that consumers’ limited knowledge about firms’ cost structure leads to too many infeasible ideas; and that firms’ lack of response to customers’ ideas leads to customer dissatisfaction.
“Although crowdsourcing initiatives are being widely adopted in many different industries, the number of ideas generated often declines over time, and implementation rates are quite low,” Srinivasan said.
Understand the dynamics to find valuable ideas
“Our findings, however, suggest that a better understanding of the dynamics at work in the crowdsourcing process can help us to address the common criticisms and propose policies that draw out the most consistently valuable ideas with the highest potential for implementation from crowdsourcing efforts in virtually any industry.”
The policies suggested by the study for effective crowdsourcing rely on the implementation of a system for peer evaluation, rapid company response to ideas that receive significant positive endorsement from the community of idea contributors, provision of precise cost signals that enable contributors to assess the feasibility of their ideas, and a system to reward contributors whose ideas are implemented rather than one that rewards individuals when they post ideas.
“Using a peer voting system, consumers are empowered to both contribute their own ideas and vote on the ideas submitted by others, enabling firms to infer the true potential of ideas as they begin to screen for ideas that are truly worthy of implementation,” Singh said.
Singh added that the initial field of ideas generated in a crowdsourcing effort tends to be overcrowded with ideas that are unlikely to be implemented as consumers overestimate the potential of their ideas and underestimate the cost of implementation.
“However, individuals learn about their abilities to come up with high-potential ideas as well as the cost structure of a firm through peer voting and the firm’s response to contributed ideas, and individuals whose ideas do not earn the favor of their peers or the backing of the firm drop out of the process while contributors of high-potential ideas remain active,” he said.
“Over time, the quality of generated ideas — in terms of their actual potential for implementation — improves while the total number of ideas contributed through crowdsourcing decreases,” Huang said.
“So, the cost to screen contributed ideas is reduced, the efficiency of the process is increased and the crowdsourcing initiative yields high-value ideas with the greatest potential for implementation.”
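The study describes policy, not code, but the peer-vote screening step it recommends can be sketched in a few lines. This is an illustration only; the idea records, field names and vote threshold are hypothetical, not part of the study:

```python
def screen_ideas(ideas, vote_threshold):
    """Illustrative sketch (not from the study): keep only crowdsourced
    ideas the community has already endorsed via peer votes, so the firm
    evaluates a smaller, higher-potential pool."""
    endorsed = [idea for idea in ideas if idea["votes"] >= vote_threshold]
    # Review the most-endorsed ideas first, which is where the screening
    # cost savings described in the study come from.
    return sorted(endorsed, key=lambda idea: idea["votes"], reverse=True)
```

Under this scheme the firm only responds to ideas that clear the threshold, which is also the signal that teaches contributors which ideas are feasible.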
Although crowdsourcing initiatives have become rapidly popular, the usefulness of this relatively new approach to idea generation has been heavily debated. There have been few academic studies of crowdsourcing despite the enormous business and media attention the topic has attracted, and this study by the team at Carnegie Mellon proposes answers to some of the most hotly contested concerns regarding the value of these initiatives.
Findings of the study in detail
Thursday, January 26th, 2012
The first systematic power profiles of microprocessors could help lower the energy consumption of both small cell phones and giant data centers, report computer science professors from The University of Texas at Austin and the Australian National University.
Their results may point the way to how companies like Google, Apple, Intel and Microsoft can make software and hardware that will lower the energy costs of very small and very large devices.
“The less power cell phones draw, the longer the battery will last,” says Kathryn McKinley, professor of computer science at The University of Texas at Austin.
“For companies like Google and Microsoft, which run these enormous data centers, there is a big incentive to find ways to be more power efficient. More and more of the money they’re spending isn’t going toward buying the hardware, but toward the power the data centers draw.”
McKinley says that without detailed power profiles of how microprocessors function with different software and different chip architectures, companies are limited in terms of how well they can optimize for energy usage.
The study she conducted with Stephen M. Blackburn of The Australian National University and their graduate students is the first to systematically measure and analyze application power, performance, and energy on a wide variety of hardware.
This work was recently invited to appear as a Research Highlight in the Communications of the Association for Computing Machinery (CACM). It’s also been selected as one of this year’s “most significant research papers in computer architecture based on novelty and long-term impact” by the journal IEEE Micro.
Measurements no one did before
“We did some measurements that no one else had done before,” says McKinley. “We showed that different software, and different classes of software, have really different power usage.”
McKinley says that such an analysis has become necessary as both the culture and the technologies of computing have shifted over the past decade.
Energy efficiency has become a greater priority for consumers, manufacturers and governments because the shrinking of processor technology has stopped yielding exponential gains in power and performance. The result of these shifts is that hardware and software designers have to take into account tradeoffs between performance and power in a way they did not ten years ago.
“Say you want to get an application on your phone that’s GPS-based,” says McKinley. “In terms of energy, the GPS is one of the most expensive functions on your phone. A bad algorithm might ping your GPS far more than is necessary for the application to function well. If the application writer could analyze the power profile, they would be motivated to write an algorithm that pings it half as often to save energy without compromising functionality.”
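The fix McKinley describes, polling the GPS less often, amounts to caching the last fix behind a minimum polling interval. A minimal sketch, assuming a hypothetical `get_fix` callable standing in for the actual GPS API:

```python
import time

class ThrottledLocator:
    """Illustrative sketch: wrap a power-hungry position fix behind a
    minimum polling interval, so the app wakes the GPS radio only when
    the cached fix has gone stale."""

    def __init__(self, get_fix, min_interval_s=30.0):
        self.get_fix = get_fix              # hypothetical callable returning (lat, lon)
        self.min_interval_s = min_interval_s
        self._last_fix = None
        self._last_time = float("-inf")     # force a poll on first use

    def location(self, now=None):
        now = time.monotonic() if now is None else now
        # Only ping the GPS if the cached fix is older than the interval.
        if now - self._last_time >= self.min_interval_s:
            self._last_fix = self.get_fix()
            self._last_time = now
        return self._last_fix
```

Doubling `min_interval_s` halves the number of GPS pings, which is exactly the kind of trade-off a power profile would let an application writer reason about.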
McKinley believes that the future of software and hardware design is one in which power profiles become a consideration at every stage of the process.
Intel, for instance, has just released a chip with an exposed power meter, so that software developers can access some information about the power profiles of their products when run on that chip. McKinley expects that future generations of chips will expose even more fine-grained information about power usage.
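On Linux, counters like the one McKinley describes are exposed through the powercap/RAPL sysfs interface as a cumulative energy count in microjoules; sampling it before and after a workload gives that workload's energy cost. A minimal sketch (the default path is the standard RAPL location, but availability depends on the kernel and CPU):

```python
def read_energy_uj(path="/sys/class/powercap/intel-rapl:0/energy_uj"):
    """Read the cumulative package-energy counter, in microjoules,
    from the Linux powercap/RAPL sysfs interface."""
    with open(path) as f:
        return int(f.read().strip())

def delta_joules(before_uj, after_uj):
    """Energy consumed between two counter samples, converted to joules."""
    return (after_uj - before_uj) / 1e6
```

Sampling the counter around two implementations of the same function is the simplest form of the per-application power profiling the study argues for.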
Even consumers may get app energy use info
Software developers like Microsoft (where McKinley is spending the next year, while taking a leave from the university) are already using what information they have to inform their designs.
And device manufacturers are testing out different architectures for their phones or tablets that optimize for power usage.
McKinley says that even consumers may get information about how much power a given app on their smart phone is going to draw before deciding whether to install it or not.
“In the past, we optimized only for performance,” she says. “If you were picking between two software algorithms, or chips, or devices, you picked the faster one. You didn’t worry about how much power it was drawing from the wall socket. There are still many situations today—for example, if you are making software for stock market traders—where speed is going to be the only consideration. But there are a lot of other areas where you really want to consider the power usage.”
Wednesday, January 18th, 2012
While 57 percent see the current U.S. business environment as somewhat or much better than the average advanced economy, respondents are much less optimistic about the trajectory of the U.S. as a competitive location, according to the results of Harvard Business School’s first Survey on U.S. Competitiveness.
When asked to assess how the trajectory of the U.S. business environment compares with emerging markets, 66 percent see the U.S. falling behind, while just 8 percent see it pulling ahead. Along with HBS Dean Nitin Nohria, Professors Michael E. Porter and Jan W. Rivkin presented the findings at the National Press Club in Washington, D.C.
Father of competitiveness strategy
A leading authority on economic competition, Porter is generally recognized as the father of the modern strategy field and has been identified in a variety of rankings and surveys as the world’s most influential thinker on management and competitiveness.
The survey also examines the desirability of the U.S. as a business location and decisions by firms to relocate existing activities or establish new ones. Of 1,767 cases where respondents had been personally involved in U.S.-related location decisions within the past year, 57 percent considered the possibility of moving existing activity out of the U.S., while only 9 percent considered moving existing activities into the United States.
The remaining 34 percent weighed decisions to set up new activities. Of those offshoring decisions that had been resolved by the time of the survey, the U.S. lost the activity 84 percent of the time. While the country fared better in potential onshoring or new activity decisions (75 percent and 51 percent win-rates, respectively), its overall win record totals just 32 percent.
U.S. losing out on business location decisions
“The U.S. is losing out on business location decisions at an alarming rate, and those activities being offshored are more job-rich than those coming in,” said Porter, the Bishop William Lawrence University Professor at Harvard and head of the Institute for Strategy and Competitiveness at HBS.
“However, the U.S. retains its core strengths in a number of important areas such as university education, innovation, and entrepreneurship, which means that we have the resources to reverse this trend. The vast amount of data from this survey highlights the need for business leaders, policymakers, and academics to collaborate on practical ways to make progress.”
The survey is part of the School’s ongoing U.S. Competitiveness Project, which defines competitiveness as “the ability of companies in the U.S. to compete successfully in the global economy while supporting high and rising living standards for Americans.”
“When we were first laying the groundwork for this Project and this survey, we thought long and hard about how competitiveness should be defined, and why it was such an important goal for the nation’s future,” said Dean Nohria.
“We made sure not to focus on job growth or inequality alone, because that ignores the need for healthy wages that will support America’s middle class. Adopting a broader definition was paramount in this effort.”
Other major findings include:
- While the negative view of the future of U.S. competitiveness is widely shared among respondents, different perceptions across groups exist. For instance, respondents between the ages of 40 and 60 are most likely to expect a decline (more than 70 percent thought so) and least likely to foresee a gain (less than 15 percent). Similarly, alumni in America are more pessimistic about the country’s future competitiveness than their counterparts outside the U.S.
- Of activities reported to have been moved out of the country in the past, 11 percent consisted of 1,000 or more jobs, while only 5 percent of activities considered for movement but retained in the U.S. consisted of 1,000 or more jobs (none moving to the U.S. consisted of 1,000 or more jobs).
- Of the 1,005 location decisions about potentially moving out of the U.S., the most common alternatives considered were China (42 percent), India (38 percent), Brazil (15 percent), Mexico (15 percent), and Singapore (12 percent).
Greatest impediments to creating jobs
The survey also asked respondents about the greatest impediments their firms faced in investing in and creating jobs in the United States. Policy-related factors like regulation and taxes are cited as major factors, along with talent-related issues like personnel cost and immigration issues.
“One of the most important aspects of this survey was its effort to pinpoint the roots of the country’s competitiveness problem,” said Rivkin, the School’s Bruce V. Rauner Professor of Business Administration.
“The findings allow us to assess whether individual elements of the U.S. business environment, such as the complexity of our tax code or our K-12 education system, each strengthens or weakens U.S. competitiveness. This provides important insight for leaders who are seeking ways to boost America’s long-run prosperity.”
Wednesday, January 18th, 2012
Listen up, pedestrians wearing headphones. Can you hear the trains or cars around you? Many probably can’t, especially young adult males.
Serious injuries to pedestrians listening to headphones have more than tripled in six years, according to new research from the University of Maryland School of Medicine and the University of Maryland Medical Center in Baltimore.
In many cases, the cars or trains are sounding horns that the pedestrians cannot hear, leading to fatalities in nearly three-quarters of cases.
“Everybody is aware of the risk of cell phones and texting in automobiles, but I see more and more teens distracted with the latest devices and headphones in their ears,” says lead author Richard Lichenstein, M.D., associate professor of pediatrics at the University of Maryland School of Medicine and director of pediatric emergency medicine research at the University of Maryland Medical Center.
“Unfortunately as we make more and more enticing devices, the risk of injury from distraction and blocking out other sounds increases.”
We certainly see more people using headphones with a variety of devices. More than once we thought someone was talking to us only to discover they were talking on a cellphone with a headset. More often, we see walkers, runners, even shoppers wearing headphones connected to smartphones, MP3 players, tablets, and iPods.
Dr. Lichenstein and his colleagues studied retrospective case reports from the National Electronic Injury Surveillance System, the U.S. Consumer Product Safety Commission, Google News Archives, and Westlaw Campus Research databases for reports published between 2004 and 2011 of pedestrian injuries or fatalities from crashes involving trains or motor vehicles.
A troubling problem
Cases involving headphone use were extracted and summarized. The research is published online today in the journal Injury Prevention.
Researchers reviewed 116 accident cases from 2004 to 2011 in which injured pedestrians were documented to be using headphones. Seventy percent of the 116 accidents resulted in death to the pedestrian. More than two-thirds of victims were male (68 percent) and under the age of 30 (67 percent).
More than half of the moving vehicles involved in the accidents were trains (55 percent), and nearly a third (29 percent) of the vehicles reported sounding some type of warning horn prior to the crash. The increased incidence of accidents over the years closely corresponds to documented rising popularity of auditory technologies with headphones.
“This research is a wonderful example of taking what our physicians see every day in the hospital and applying a broader scientific view to uncover a troubling societal problem that needs greater awareness,” says E. Albert Reece, M.D., Ph.D., M.B.A., vice president for medical affairs at the University of Maryland and John Z. and Akiko K. Bowers Distinguished Professor and dean of the University of Maryland School of Medicine.
“I hope that these results will help to significantly reduce incidence of injuries and lead us to a better understanding of how such injuries occur and how we can prevent them.”
Dr. Lichenstein and his colleagues noted two likely phenomena associated with these injuries and deaths: distraction and sensory deprivation. The distraction caused by the use of electronic devices has been coined “inattentional blindness,” in which multiple stimuli divide the brain’s mental resource allocation. In cases of headphone-wearing pedestrian collisions with vehicles, the distraction is intensified by sensory deprivation, in which the pedestrian’s ability to hear a train or car warning signal is masked by the sounds produced by the portable electronic device and headphones.
Dr. Lichenstein says the study was initiated after reviewing a tragic pediatric death where a local teen died crossing railroad tracks. The teen was noted to be wearing headphones and did not avoid the oncoming train despite auditory alarms. Further review revealed other cases not only in Maryland but in other states too.
“As a pediatric emergency physician and someone interested in safety and prevention I saw this as an opportunity to — at minimum — alert parents of teens and young adults of the potential risk of wearing headphones where moving vehicles are present,” he says.
Thursday, December 15th, 2011
The coming years will bring increased personalization, innovation and flexibility in the media landscape, according to the Georgia Institute of Technology.
These findings were announced in today’s release of the FutureMedia(SM) Outlook 2012, a multimedia report that offers Georgia Tech’s annual viewpoint on the future of media and its impact on people, business and society over the next five to seven years.
“Georgia Tech’s work in Future Media is part of our new Institute for People and Technology,” said Georgia Tech President G. P. “Bud” Peterson. “By partnering with business and industry on interdisciplinary research, we are able to identify trends and challenges and work to develop transformative solutions.”
According to FutureMedia Outlook 2012, six megatrends will have a pervasive impact:
- Smart Data: In an increasingly noisy world, we’ll have to sift, filter and be smarter about what matters.
- People Platforms: Beyond “true personalization,” people will not just be consumers. They will be socially driven platforms made of algorithms from personal and associated data that they design and tailor themselves.
- Content Integrity: Pervasive mobile devices, sprawling networks, clouds and multi-layered platforms have made it more difficult to detect and address our digital vulnerabilities, drawing us to trusted content sources.
- Nimble Media: Media is evolving from a set of fixed commodities into an energetic, pervasive medium that allows people to navigate across platforms and through different content narratives.
- 6th Sense: Extraordinary innovations in mixed reality will change the way we see, hear, taste, touch, smell and make sense of the world – giving us a new and powerful 6th sense.
- Collaboration: We will harness the power of many in an increasingly conversational and participatory world.
For each of the six megatrends, the Outlook 2012 presents fresh and objective insights into those technologies and business practices that will significantly impact the converging media ecosystem. In addition, the report includes demonstrative clips and video interviews with leading Georgia Tech researchers offering real-world examples of the Institute’s innovation in these areas.
“Breakthrough research, innovation and collaboration with our partners have given us a rich and pragmatic basis from which to formulate this annual FutureMedia Outlook,” said Renu Kulkarni, founder and executive director of FutureMedia.
Wednesday, December 14th, 2011
Humble leaders are more effective and better liked, according to a study forthcoming in the Academy of Management Journal.
“Leaders of all ranks view admitting mistakes, spotlighting follower strengths and modeling teachability as being at the core of humble leadership,” says Bradley Owens, assistant professor of organization and human resources at the University at Buffalo School of Management. “And they view these three behaviors as being powerful predictors of their own as well as the organization’s growth.”
Owens and co-author David Hekman, assistant professor of management at the Lubar School of Business, University of Wisconsin-Milwaukee, asked 16 CEOs, 20 mid-level leaders and 19 front-line leaders to describe in detail how humble leaders operate in the workplace and how a humble leader behaves differently than a non-humble leader.
Although the leaders were from vastly different organizations — military, manufacturing, health care, financial services, retailing and religious — they all agreed that the essence of leader humility involves modeling to followers how to grow.
Growing involves failure
“Growing and learning often involves failure and can be embarrassing,” says Owens. “But leaders who can overcome their fears and broadcast their feelings as they work through the messy internal growth process will be viewed more favorably by their followers. They also will legitimize their followers’ own growth journeys and will have higher-performing organizations.”
The researchers found that such leaders model how to be effectively human rather than superhuman and legitimize “becoming” rather than “pretending.”
But some humble leaders were more effective than others, according to the study.
Humble leaders who were young, nonwhite or female were reported as having to constantly prove their competence to followers, making their humble behaviors both more expected and less valued. However, humble leaders who were experienced white males were reported as reaping large benefits from humbly admitting mistakes, praising followers and trying to learn.
Female leaders face a double bind
In contrast, female leaders often feel they are expected to show more humility than their male counterparts, but then they have their competence called into question when they do show humility.
“Our results suggest that female leaders often experience a ‘double bind,'” Owens says. “They are expected to be strong leaders and humble females at the same time.”
Owens and Hekman offer straightforward advice to leaders. You can’t fake humility. You either genuinely want to grow and develop, or you don’t, and followers pick up on this.
Leaders who want to grow signal to followers that learning, growth, mistakes, uncertainty and false starts are normal and expected in the workplace, and this produces followers and entire organizations that constantly keep growing and improving.
A follow-up study that is forthcoming in Organization Science using data from more than 700 employees and 218 leaders confirmed that leader humility is associated with more learning-oriented teams, more engaged employees and lower voluntary employee turnover.