Walk Around A Splendid 3D House From Ancient Pompeii

By combining traditional archaeology with 3D technology, researchers at Lund University in Sweden have reconstructed a house in Pompeii to its original state before the eruption of Mount Vesuvius nearly 2,000 years ago. Unique video material has now been produced, showing their creation of a 3D model of an entire block of houses.

After the catastrophic earthquake in Italy in 1980, the Pompeii city curator invited the international research community to help document the ruined city before the finds from the eruption in AD 79 deteriorated even further. The Swedish Pompeii Project was therefore started at the Swedish Institute in Rome in 2000. The researcher in charge of the rescue operation was Anne-Marie Leander Touati, at the time director of the institute in Rome and now Professor of Classical Archaeology and Ancient History at Lund University.

Since 2010, the research has been managed by the Department of Archaeology and Ancient History in Lund. The project now also includes a new branch of advanced digital archaeology, with 3D models that build on the completed photo documentation. The city district was scanned during fieldwork in 2011–2012, and the first 3D models of the ruined city have now been completed. The models show what life was like for the people of Pompeii before the eruption of Mount Vesuvius. The researchers have even completed a detailed reconstruction of a large house belonging to the wealthy man Caecilius Iucundus.
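For readers curious what "walking around" such a model involves in practice, below is a minimal Python sketch using the open-source Open3D library to load and inspect a textured mesh in an interactive viewer. The file name is a hypothetical placeholder for any mesh exported from a photogrammetry pipeline; this is not the project's own software.

```python
# Minimal sketch: load a reconstructed mesh and open an interactive viewer.
# The file name is a hypothetical placeholder for any exported OBJ/PLY mesh.
import open3d as o3d

mesh = o3d.io.read_triangle_mesh(
    "house_of_caecilius_iucundus.obj",  # hypothetical export from a scanning pipeline
    enable_post_processing=True,        # loads textures/materials when present
)
mesh.compute_vertex_normals()  # needed for proper shading in the viewer

# Opens a window you can orbit and zoom with the mouse.
o3d.visualization.draw_geometries([mesh], window_name="Pompeii block")
```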

“By combining new technology with more traditional methods, we can describe Pompeii in greater detail and more accurately than was previously possible”, says Nicolò Dell’Unto, digital archaeologist at Lund University.

Among other things, the researchers have uncovered floor surfaces from AD 79, performed detailed studies of the building development through history, cleaned and documented three large wealthy estates, a tavern, a laundry, a bakery and several gardens. In one garden, they discovered that some of the taps to a stunning fountain were on at the time of eruption – the water was still gushing when the rain of ash and pumice fell over Pompeii.

The researchers also occasionally found completely untouched layers. In one shop, they found three amazingly intact windows from Ancient Rome, made of translucent crystalline gypsum and stacked against each other. By studying the water and sewer systems, they were able to interpret the social hierarchies of the time, seeing how retailers and restaurants depended on large wealthy families for water, and how conditions improved in the final years before the eruption.

An aqueduct was built in Pompeii, so residents no longer had to rely on a few deep wells or on the rainwater tanks of large wealthy households.

The work behind the 3D film, and a discussion of the credibility of the reconstructions, are presented in an article published in the Italian journal SCIRES-IT.

Istituto di Scienza e Tecnologie dell’Informazione and the Humanities Lab at Lund University have contributed to the development of the material and 3D work.

Article: Reconstructing the Original Splendour of the House of Caecilius Iucundus. A Complete Methodology for Virtual Archaeology Aimed at Digital Exhibition.


Bug Eyes: Tiny 3D Glasses Confirm Insect 3D Vision

This is a mantis wearing 3D glasses. Credit: Newcastle University

Miniature glasses have proved that mantises use 3D vision — providing a new model to improve visual perception in robots.

Most knowledge about 3D vision has come from vertebrates. Now, however, a team from Newcastle University, UK, publishing in Scientific Reports, confirms that the praying mantis, an invertebrate, does indeed use stereopsis, or 3D perception, for hunting.

In a specially designed insect cinema, they have shown that the 3D glasses need to be ‘old school’ for the tests to work on mantises. In humans these would have red and blue lenses, but because red light is poorly visible to mantises, the team custom-made glasses with one blue and one green lens.

Better understanding of 3D vision

3D vision in mantises was originally shown in the 1980s by Samuel Rossel, but his work used prisms and occluders which meant that only a very limited set of images could be shown. The Newcastle University team has developed 3D glasses suitable for insects which means they can show the insects any images they want, opening up new avenues of research.

Study leader Jenny Read, Professor of Vision Science, said: “Despite their minute brains, mantises are sophisticated visual hunters which can capture prey with terrifying efficiency. We can learn a lot by studying how they perceive the world.

“Better understanding of their simpler processing systems helps us understand how 3D vision evolved, and could lead to possible new algorithms for 3D depth perception in computers.”

In the experiments, mantises fitted with tiny glasses attached with beeswax were shown short videos of simulated bugs moving around a computer screen. The mantises didn’t try to catch the bugs when they were in 2D. But when the bugs were shown in 3D, apparently floating in front of the screen, the mantises struck out at them. This shows that mantises do indeed use 3D vision.

Old-school 3D glasses

Initial testing of the most widely used contemporary 3D technology for humans, which relies on circular polarization to separate the two eyes’ images, didn’t work: the insects were so close to the screen that the glasses failed to keep the two images separate.

Dr Vivek Nityananda, sensory biologist at Newcastle University and part of the research team, continues: “When this system failed, we looked at the old-style 3D glasses with red and blue lenses. Since red light is poorly visible to mantises, we used green and blue glasses and an LED monitor with unusually narrow output in the green and blue wavelengths.

“We definitively demonstrated 3D vision or stereopsis in mantises and also showed that this technique can be effectively used to deliver virtual 3D stimuli to insects.”
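To illustrate the colour-filter principle the team describes, here is a hedged Python sketch that composites a “left eye” image into the green channel and a “right eye” image into the blue channel, so that matching green and blue lenses deliver a different view to each eye. The file names are hypothetical, and this is not the researchers’ actual stimulus software, which would also need to match the monitor’s narrow-band primaries.

```python
# Sketch of a green/blue anaglyph stimulus: each eye's view is placed in a
# separate colour channel, so a green lens passes one view and a blue lens
# the other. File names are hypothetical placeholders; both images must
# share the same dimensions.
import numpy as np
from PIL import Image

left = np.asarray(Image.open("bug_left.png").convert("L"), dtype=np.uint8)
right = np.asarray(Image.open("bug_right.png").convert("L"), dtype=np.uint8)

anaglyph = np.zeros(left.shape + (3,), dtype=np.uint8)
anaglyph[..., 1] = left    # green channel -> eye behind the green lens
anaglyph[..., 2] = right   # blue channel  -> eye behind the blue lens

Image.fromarray(anaglyph).save("anaglyph_stimulus.png")
```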

The Newcastle University team will now continue the research examining the algorithms used for depth perception in insects to better understand how human vision evolved and to develop new ways of adding 3D technology to computers and robots.


Story Source:

The above post is reprinted from materials provided by Newcastle University. Note: Materials may be edited for content and length.

 

Playing 3-D Video Games Can Boost Memory Formation


Don’t put that controller down just yet. Playing three-dimensional video games — besides being lots of fun — can boost the formation of memories, according to University of California, Irvine neurobiologists.

Along with adding to the trove of research that shows these games can improve eye-hand coordination and reaction time, this finding shows the potential for novel virtual approaches to helping people who lose memory as they age or suffer from dementia. Study results appear Dec. 9 in The Journal of Neuroscience.

For their research, Craig Stark and Dane Clemenson of UCI’s Center for the Neurobiology of Learning & Memory recruited non-gamer college students to play either a video game with a passive, two-dimensional environment (“Angry Birds”) or one with an intricate, 3-D setting (“Super Mario 3D World”) for 30 minutes per day over two weeks.

Before and after the two-week period, the students took memory tests that engaged the brain’s hippocampus, the region associated with complex learning and memory. They were given a series of pictures of everyday objects to study. Then they were shown images of the same objects, new ones and others that differed slightly from the original items and asked to categorize them. Recognition of the slightly altered images requires the hippocampus, Stark said, and his earlier research had demonstrated that the ability to do this clearly declines with age. This is a large part of why it’s so difficult to learn new names or remember where you put your keys as you get older.
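The article doesn’t give the exact scoring procedure, but tests of this kind are typically summarized by how often the slightly altered “lure” images are correctly judged “similar” rather than “old”. The short Python sketch below computes one plausible lure discrimination score under that assumption; the response data are made up for illustration.

```python
# Hedged sketch: scoring a recognition test in which each item is judged
# "old", "similar", or "new". Lures are the slightly altered images;
# correctly calling them "similar" requires fine-grained discrimination.
# The responses below are invented for illustration only.
responses = {
    "lure": ["similar", "old", "similar", "similar", "old"],
    "new":  ["new", "new", "similar", "new", "new"],
}

def rate(items, label):
    return sum(r == label for r in items) / len(items)

# Bias-corrected lure discrimination: "similar" responses to lures,
# minus "similar" responses to genuinely new items.
ldi = rate(responses["lure"], "similar") - rate(responses["new"], "similar")
print(f"Lure discrimination index: {ldi:.2f}")  # 0.60 - 0.20 = 0.40
```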

Students playing the 3-D video game improved their scores on the memory test, while the 2-D gamers did not. The boost was not small either. Memory performance increased by about 12 percent, the same amount it normally decreases between the ages of 45 and 70.

In previous studies on rodents, postdoctoral scholar Clemenson and others showed that exploring the environment resulted in the growth of new neurons that became entrenched in the hippocampus’ memory circuit and increased neuronal signaling networks. Stark noted some commonalities between the 3-D game the humans played and the environment the rodents explored — qualities lacking in the 2-D game.

“First, the 3-D games have a few things the 2-D ones do not,” he said. “They’ve got a lot more spatial information in there to explore. Second, they’re much more complex, with a lot more information to learn. Either way, we know this kind of learning and memory not only stimulates but requires the hippocampus.”

Stark added that it’s unclear whether the overall amount of information and complexity in the 3-D game or the spatial relationships and exploration is stimulating the hippocampus. “This is one question we’re following up on,” he said.

Unlike typical brain training programs, the professor of neurobiology & behavior pointed out, video games are not created with specific cognitive processes in mind but rather are designed to immerse users in the characters and adventure. They draw on many cognitive processes, including visual, spatial, emotional, motivational, attentional, critical thinking, problem-solving and working memory.

“It’s quite possible that by explicitly avoiding a narrow focus on a single … cognitive domain and by more closely paralleling natural experience, immersive video games may be better suited to provide enriching experiences that translate into functional gains,” Stark said.

The next step for him and his colleagues is to determine if environmental enrichment — either through 3-D video games or real-world exploration experiences — can reverse the hippocampal-dependent cognitive deficits present in older populations. This effort is funded by a $300,000 Dana Foundation grant.

“Can we use this video game approach to help improve hippocampus functioning?” Stark asked. “It’s often suggested that an active, engaged lifestyle can be a real factor in stemming cognitive aging. While we can’t all travel the world on vacation, we can do many other things to keep us cognitively engaged and active. Video games may be a nice, viable route.”


Reference: The Journal of Neuroscience, Dec. 9, 2015, in press. DOI: 10.1523/JNEUROSCI.2580-15.2015


Story Source:

The above post is reprinted from materials provided by University of California – Irvine. Note: Materials may be edited for content and length.

Want A Quick 3-D Copy Of Something? Camera Chip For Smartphone Provides Superfine 3-D Resolution


Imagine you need to have an almost exact copy of an object. Now imagine that you can just pull your smartphone out of your pocket, take a snapshot with its integrated 3-D imager, send it to your 3-D printer, and within minutes you have reproduced a replica accurate to within microns of the original object. This feat may soon be possible because of a new, tiny high-resolution 3-D imager developed at Caltech.

Any time you want to make an exact copy of an object with a 3-D printer, the first step is to produce a high-resolution scan of the object with a 3-D camera that measures its height, width, and depth. Such 3-D imaging has been around for decades, but the most sensitive systems generally are too large and expensive to be used in consumer applications.

A cheap, compact yet highly accurate new device known as a nanophotonic coherent imager (NCI) promises to change that. Using an inexpensive silicon chip less than a millimeter square in size, the NCI provides the highest depth-measurement accuracy of any such nanophotonic 3-D imaging device.

The work, done in the laboratory of Ali Hajimiri, the Thomas G. Myers Professor of Electrical Engineering in the Division of Engineering and Applied Science, is described in the February 2015 issue of Optics Express.

In a regular camera, each pixel represents the intensity of the light received from a specific point in the image, which could be near or far from the camera—meaning that the pixels provide no information about the relative distance of the object from the camera. In contrast, each pixel in an image created by the Caltech team’s NCI provides both the distance and intensity information. “Each pixel on the chip is an independent interferometer—an instrument that uses the interference of light waves to make precise measurements—which detects the phase and frequency of the signal in addition to the intensity,” says Hajimiri.

The new chip utilizes an established detection and ranging technology called LIDAR, in which a target object is illuminated with scanning laser beams. The light that reflects off of the object is then analyzed based on the wavelength of the laser light used, and the LIDAR can gather information about the object’s size and its distance from the laser to create an image of its surroundings. “By having an array of tiny LIDARs on our coherent imager, we can simultaneously image different parts of an object or a scene without the need for any mechanical movements within the imager,” Hajimiri says.

Such high-resolution images and information provided by the NCI are made possible because of an optical concept known as coherence. If two light waves are coherent, the waves have the same frequency, and the peaks and troughs of the light waves are exactly aligned with one another. In the NCI, the object is illuminated with this coherent light. The light reflected off the object is then picked up by on-chip detectors, called grating couplers, that serve as “pixels,” as the light detected from each coupler represents one pixel on the 3-D image. On the NCI chip, the phase, frequency, and intensity of the reflected light from different points on the object are detected and used to determine the exact distance of each target point.

Because the coherent light has a consistent frequency and wavelength, it is used as a reference with which to measure the differences in the reflected light. In this way, the NCI uses the coherent light as sort of a very precise ruler to measure the size of the object and the distance of each point on the object from the camera. The light is then converted into an electrical signal that contains intensity and distance information for each pixel—all of the information needed to create a 3-D image.
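The article doesn’t spell out the modulation scheme, but a common way a coherent, frequency-swept LIDAR turns this “precise ruler” into a distance is to mix the reflected light with the reference beam: the beat frequency between the two is proportional to the round-trip delay. The sketch below shows that arithmetic with illustrative numbers, which are assumptions and not taken from the Caltech paper.

```python
# Hedged sketch of frequency-swept coherent ranging (numbers illustrative,
# not from the Caltech paper). The reflected light arrives delayed by the
# round trip, so mixing it with the swept reference produces a beat
# frequency proportional to distance.
C = 3.0e8            # speed of light, m/s
BANDWIDTH = 100e9    # assumed optical frequency sweep, Hz
SWEEP_TIME = 1e-3    # assumed sweep duration, s

def distance_from_beat(f_beat_hz):
    slope = BANDWIDTH / SWEEP_TIME        # sweep rate, Hz per second
    round_trip_delay = f_beat_hz / slope  # seconds
    return C * round_trip_delay / 2       # one-way distance, metres

print(distance_from_beat(333e3))  # ~0.5 m, comparable to the penny-scan distance
```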

The incorporation of coherent light not only allows 3-D imaging with the highest level of depth-measurement accuracy ever achieved in silicon photonics, it also makes it possible for the device to fit in a very small size. “By coupling, confining, and processing the reflected light in small pipes on a silicon chip, we were able to scale each LIDAR element down to just a couple of hundred microns in size—small enough that we can form an array of 16 of these coherent detectors on an active area of 300 microns by 300 microns,” Hajimiri says.

The first proof of concept of the NCI has only 16 coherent pixels, meaning that the 3-D images it produces are only 16 pixels at a time. However, the researchers also developed a method for imaging larger objects by first imaging a four-pixel-by-four-pixel section, then moving the object in four-pixel increments to image the next section. With this method, the team used the device to scan and create a 3-D image of the “hills and valleys” on the front face of a U.S. penny—with micron-level resolution—from half a meter away.
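To get a feel for how a 16-pixel imager can still build a larger picture, the sketch below stitches 4×4-pixel tiles, captured as the object is stepped in four-pixel increments, into one composite depth map. The capture function is a hypothetical stand-in for the real hardware and simply returns placeholder values.

```python
# Sketch of assembling a large depth map from 4x4-pixel tiles, as described
# above. capture_tile() is a hypothetical stand-in for reading the 16-pixel
# imager after the object has been stepped to tile (row, col).
import numpy as np

TILE = 4

def capture_tile(row, col):
    # Placeholder: returns a 4x4 array of per-pixel depths in metres
    # (random values here, purely for illustration).
    rng = np.random.default_rng(row * 1000 + col)
    return rng.uniform(0.49, 0.51, size=(TILE, TILE))

def scan(rows, cols):
    depth = np.zeros((rows * TILE, cols * TILE))
    for r in range(rows):
        for c in range(cols):
            depth[r*TILE:(r+1)*TILE, c*TILE:(c+1)*TILE] = capture_tile(r, c)
    return depth

penny_scan = scan(rows=8, cols=8)   # a 32x32-pixel composite depth map
print(penny_scan.shape)             # (32, 32)
```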

In the future, Hajimiri says, the current array of 16 pixels could easily be scaled up to hundreds of thousands. One day, by creating such vast arrays of these tiny LIDARs, the imager could be applied to a broad range of applications, from very precise 3-D scanning and printing, to helping driverless cars avoid collisions, to improving motion sensitivity in superfine human-machine interfaces, where the slightest movements of a patient’s eyes and the most minute changes in a patient’s heartbeat can be detected on the fly.

“The small size and high quality of this new chip-based imager will result in significant cost reductions, which will enable thousands of new uses for such systems by incorporating them into personal devices such as smartphones,” he says.

Story Source:

The above story is based on materials provided by California Institute of Technology. The original article was written by Jessica Stoller-Conrad. Note: Materials may be edited for content and length.

Unprecedented 3-D View Of Important Brain Receptor

Credit: Image courtesy of Oregon Health & Science University

Researchers with Oregon Health & Science University’s Vollum Institute have given science a new and unprecedented 3-D view of one of the most important receptors in the brain — a receptor that allows us to learn and remember, and whose dysfunction is involved in a wide range of neurological diseases and conditions, including Alzheimer’s, Parkinson’s, schizophrenia and depression.

The unprecedented view provided by the OHSU research, published online June 22 in the journal Nature, gives scientists new insight into how the receptor — called the NMDA receptor — is structured. And importantly, the new detailed view gives vital clues for developing drugs to combat these neurological diseases and conditions.

“This is the most exciting moment of my career,” said Eric Gouaux, a senior scientist at the Vollum Institute and a Howard Hughes Medical Institute investigator. “The NMDA receptor is one of the most essential, and still sometimes mysterious, receptors in our brain. Now, with this work, we can see it in fascinating detail.”

Receptors facilitate chemical and electrical signals between neurons in the brain, allowing those neurons to communicate with each other. The NMDA (N-methyl-D-aspartate) receptor is one of the most important brain receptors, as it facilitates the neuron communication that is the foundation of memory, learning and thought. Malfunction of the NMDA receptor, when it becomes overactive or underactive, is associated with a wide range of neurological disorders and diseases. Alzheimer’s disease, Parkinson’s disease, depression, schizophrenia and epilepsy are, in many instances, linked to problems with NMDA activity.

Scientists across the world study the NMDA receptor; some of the most notable discoveries about the receptor during the past three decades have been made by OHSU Vollum scientists.

The NMDA receptor is made up of receptor “subunits” — all of which have distinct properties and act in distinct ways in the brain, sometimes causing neurological problems. Prior to Gouaux’s study, scientists had only a limited view of how those subunits were arranged in the NMDA receptor complex and how they interacted to carry out specific functions within the brain and central nervous system.

Gouaux’s team of scientists — Chia-Hsueh Lee, Wei Lu, Jennifer Michel, April Goehring, Juan Du and Xianqiang Song — created a 3-D model of the NMDA receptor through a process called X-ray crystallography. In this process, X-ray beams are fired at crystals of the receptor; a computer then calculates the structure from the way those beams bounce off the crystals. The resulting 3-D model of the receptor, which looks something like a bouquet of flowers, shows where the receptor subunits are located and gives unprecedented insight into their actions.
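As a rough guide to why those bounce angles carry structural information, diffraction from a crystal obeys Bragg’s law, n·λ = 2d·sin(θ). The short Python sketch below inverts it to find the angle at which a given atomic plane spacing reflects an X-ray beam; the numbers are illustrative assumptions, and a full protein structure determination is of course far more involved than this.

```python
# Bragg's law sketch: n * wavelength = 2 * d * sin(theta).
# Given an X-ray wavelength and a plane spacing, find the diffraction angle.
import math

def bragg_angle_deg(wavelength_nm, d_spacing_nm, order=1):
    s = order * wavelength_nm / (2.0 * d_spacing_nm)
    if s > 1.0:
        raise ValueError("No diffraction for this order/wavelength/spacing")
    return math.degrees(math.asin(s))

# Illustrative numbers: Cu K-alpha X-rays (~0.154 nm) and a 0.4 nm plane spacing.
print(f"{bragg_angle_deg(0.154, 0.4):.1f} degrees")  # ~11.1 degrees
```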

“This new detailed view will be invaluable as we try to develop drugs that might work on specific subunits and therefore help fight or cure some of these neurological diseases and conditions,” Gouaux said. “Seeing the structure in more detail can unlock some of its secrets — and may help a lot of people.”

Editor’s note: The original publication can be found here.