Scientists Build Nanostructures out of Single DNA Strands

(PhysOrg.com) — With its unique double-helical structure, DNA can be used as a programmable building material to construct designer nanoscale architectures. Complex DNA architectures could have a variety of applications, from DNA-based nanomotors to biosensing and drug delivery. Taking the research a step forward, researchers have recently constructed a nanometer-sized tetrahedron from a single strand of DNA, using a method that could have advantages for assembling similar structures on a large scale.

Image: Front and top views of the 3D molecular model of the tetrahedron. Image copyright: Zhe Li, et al.

The researchers, from Arizona State University (ASU) and the Hong Kong University of Science and Technology (HKUST), have published their results in a recent issue of the Journal of the American Chemical Society. As the researchers explain, the variety of artificial DNA constructions has been increasing, but until now 3D DNA nanostructures have been made from multiple DNA strands (oligonucleotides) with deliberately designed sequences. In this new study, Hao Yan of ASU, Yongli Mi of HKUST, and their colleagues have shown that DNA tetrahedrons can be self-folded from only a single DNA strand. In addition, they demonstrated a method to replicate the DNA tetrahedrons in vivo, which could also be applied to the design and replication of other DNA nanostructures in the future.

“A self-folded 3D nanocage that can be replicated in vivo tells us how powerful nature’s machineries are,” Yan and Mi told PhysOrg.com. “DNA nanostructures can serve as scaffolds to organize other material with controlled spatial arrangement. Spatial dependent biomolecular/nanomaterial interactions can thus be tuned and studied.”

The DNA tetrahedrons, made of four triangular faces, were constructed from a DNA strand 286 nucleotides long. The tetrahedron’s six edges were composed of double helices: five were identical, while the sixth edge had a more complex “twin double-helical” structure. Four of the edges contained a cleavable site in the center, and each of the four vertices consisted of an unpaired thymine base to allow adequate flexibility for folding at these corners. Once the strand was designed in this way, the researchers annealed the DNA in a process of heating and then cooling. When annealed, the DNA strand self-assembled into the seven-nanometer tetrahedron by pairing the appropriate bases together.

After confirming the successful assembly of the DNA tetrahedron, the researchers developed a method to replicate the nanostructures using in vivo cloning in order to produce them on a large scale. They inserted one of the tetrahedrons into a cloning molecule called a phagemid, and then recovered several replicated tetrahedrons through restriction digestion of the phagemid. This method is fully scalable, with the yield of cloned structures proportional to the size of the culture.
As the researchers explain, using only a single DNA strand to create nanostructures has several advantages, including simplifying the assembly process, increasing yield, making it possible to scale up production, and creating structures with longer life spans in biological systems, such as inside living cells. This last property is especially appealing for in vivo applications such as biosensing and drug delivery. In the future, the researchers hope to build on this method to synthesize nanostructures out of RNA, as well as to build other complex shapes.

More information: Zhe Li, Bryan Wei, Jeanette Nangreave, Chenxiang Lin, Yan Liu, Yongli Mi, and Hao Yan. “A Replicable Tetrahedral Nanostructure Self-Assembled from a Single DNA Strand.” J. Am. Chem. Soc. DOI: 10.1021/ja903768f
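The self-assembly step relies on Watson-Crick base pairing: as the solution cools, segments of the single strand find and bind their complements to form the double-helical edges. As a rough illustration only, the sketch below checks whether two hypothetical strand segments could hybridize into one edge; the sequences are invented for this example and are not taken from the published 286-nucleotide design.

```python
# Minimal sketch of the base-pairing rule that drives self-assembly: two
# segments of the same strand can form a double-helical edge only if one is
# the reverse complement of the other. Sequences below are made up for
# illustration; they are not from the published tetrahedron design.

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq: str) -> str:
    """Return the reverse complement of a DNA sequence (5'->3')."""
    return "".join(COMPLEMENT[base] for base in reversed(seq))

def can_pair(segment_a: str, segment_b: str) -> bool:
    """True if the two segments could hybridize into a duplex edge."""
    return segment_a == reverse_complement(segment_b)

if __name__ == "__main__":
    edge_half_1 = "ATGCCGTT"                     # hypothetical segment
    edge_half_2 = reverse_complement(edge_half_1)
    print(can_pair(edge_half_1, edge_half_2))    # True: these could form one edge
```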


Company Claims ESLs to be the Future of Light Bulbs (w/ Video)

(PhysOrg.com) — While compact fluorescent lights (CFLs) are currently the primary alternative to incandescent light bulbs, a company from Seattle predicts that its own novel light bulbs will eventually replace CFLs and LEDs. Vu1 (“view one”) Corporation has been working on its electron stimulated luminescence (ESL) bulbs, and has recently released a demo video (below).

Image: Vu1’s conceptual design for its R-30 bulb. Credit: Vu1.

ESL technology works by firing electrons at phosphor, which then glows. As Vu1 explains, the technology is similar to that used in cathode ray tubes and TVs. However, the bulbs have several improvements, such as in uniform electron distribution, energy efficiency, phosphor performance and manufacturing costs. “CRT and TV technology is based on delivering an electron ‘beam’ and then turning pixels on and off very quickly,” the company explains on its website. “ESL technology is based on uniformly delivering a ‘spray’ of electrons that illuminate a large surface very energy efficiently over a long lifetime.”

With ESLs, Vu1 hopes to overcome some of the challenges faced by CFLs and LEDs, the two lighting technologies considered to have the most potential in the post-incandescent era. As the company explains, CFLs’ biggest problem is that they contain about 5 milligrams of mercury. If not recycled properly – or if they’re accidentally broken – CFLs release mercury into the air or groundwater. In addition, some people find CFLs’ cooler colors less pleasing than the warmer tones of incandescent bulbs.

On the other hand, LEDs don’t contain hazardous materials like mercury, and can last for up to 50,000 hours. However, their heat dissipation requirements make them more expensive than other bulbs, with a projected retail price of about $40 each.

In contrast, ESLs don’t contain hazardous substances and should cost about $20, or the equivalent of a dimmable CFL reflector bulb, according to Vu1. Similar to CFLs, ESLs use 65% less energy than incandescent bulbs, and last for up to 6,000 hours, or about four times the lifespan of incandescents. Other advantages of ESLs include a warm color temperature similar to incandescent light, as well as the ability to turn on instantly and be fully dimmable.

Vu1 plans to begin manufacturing ESLs by the end of the year, and hopes to market the bulb starting in mid-2010. Initially, the company will launch reflector-shaped bulbs, which are used in recessed lighting. Later it hopes to expand into other bulb forms such as standard A bulbs and tubes.

More information: www.vu1.com
via: CNet Crave
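To put the quoted prices, lifetimes, and the “65% less energy” figure in context, here is a rough cost-of-ownership sketch. Only the ESL and LED prices, the lifetimes, and the 65% savings figure come from the article; the incandescent price and wattage, the LED wattage, the electricity rate, and the daily usage are assumptions added for illustration.

```python
# Rough yearly cost-of-ownership comparison. Figures marked "assumed" are
# illustrative placeholders, not values from the article.

HOURS_PER_DAY = 3             # assumed daily usage
PRICE_PER_KWH = 0.12          # assumed electricity price, USD per kWh
INCANDESCENT_WATTS = 65       # assumed reflector-bulb wattage

bulbs = {
    # name: (purchase price USD, rated life in hours, watts)
    "incandescent": (1.0, 1500, INCANDESCENT_WATTS),            # price assumed; life = 1/4 of ESL
    "ESL":          (20.0, 6000, INCANDESCENT_WATTS * 0.35),    # 65% less energy (article)
    "LED":          (40.0, 50000, INCANDESCENT_WATTS * 0.20),   # LED wattage assumed
}

def yearly_cost(price, life_hours, watts):
    """Energy cost plus prorated bulb-replacement cost for one year of use."""
    hours_per_year = HOURS_PER_DAY * 365
    energy = watts / 1000 * hours_per_year * PRICE_PER_KWH
    replacement = price * hours_per_year / life_hours
    return energy + replacement

for name, (price, life, watts) in bulbs.items():
    print(f"{name:13s} ~${yearly_cost(price, life, watts):.2f} per year")
```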


Near-threshold computing could enable up to 100x reduction in power consumption

(PhysOrg.com) — While electronic devices have greatly improved in many regards, such as storage capacity, graphics, and overall performance, they still have a weight hanging around their neck: they’re huge energy hogs. When it comes to energy efficiency, today’s computers, cell phones, and other gadgets are little better off than those from a decade ago, or more. The problem of power goes beyond being green and saving money. For electrical engineers, power has become the primary design constraint for future electronic devices. Without lowering power consumption, improvements made in other areas of electronic devices could be useless, simply because there isn’t enough power to support them.

In a recent study, a team of researchers from the University of Michigan, Ronald Dreslinski, et al., has investigated a solution to the power problem using a method called near-threshold computing (NTC). In the NTC method, electronic devices operate at lower voltages than normal, which reduces energy consumption. The researchers predict that NTC could enable future computer systems to reduce energy requirements by 10 to 100 times or more by optimizing them for low-voltage operation. Unfortunately, low-voltage operation also involves trade-offs: specifically, performance loss, performance variation, and memory and logic failures.

Continuing Moore’s law

As the researchers explain, reducing power consumption is essential for allowing the continuation of Moore’s law, which states that the number of transistors on a chip doubles about every two years. Continuing this exponential growth is becoming more and more difficult, and power consumption is the largest barrier to meaningful increases in chip density. While engineers can design chips to hold additional transistors, power consumption has begun to prohibit these devices from actually being used. As the researchers explain, engineers are currently facing “a curious design dilemma: more gates can now fit on a die, but a growing fraction cannot actually be used due to strict power limits. … It is not an exaggeration to state that developing energy-efficient solutions is critical to the survival of the semiconductor industry.”

In the past, technologies that required large amounts of power were eventually replaced by more energy-efficient technologies; for example, vacuum tubes were replaced by transistors. Today, transistors are arranged using CMOS (complementary metal-oxide-semiconductor) circuit design techniques. Since beyond-CMOS technologies are still far from being commercially viable, and large investments have been made in CMOS-based infrastructure, the Michigan researchers predict that CMOS will likely be around for a while. For this reason, solutions to the power problem must come from within.

Image: Using Moore’s law as the metric of progress has become misleading: starting around the 65-nm node, improvements in packing densities no longer translate to proportional increases in performance or energy efficiency. Researchers predict that near-threshold computing could restore the relationship between transistor density and energy efficiency. Credit: Dreslinski, et al. ©2010 IEEE.

“NTC is an enabling technology that allows for continued scaling of CMOS-based devices, while significantly improving energy efficiency,” Dreslinski told PhysOrg.com. “The major impact of the work is that, for a fixed battery lifetime, significantly more transistors can be used, allowing for greater functionality. Particularly, [NTC allows] the full use of all transistors offered by technology scaling, eliminating ‘Dark Silicon’ that occurs as we scale to future technology nodes beyond 22 nm where ’more transistors can be placed on chip, but will be unable to be turned on concurrently.’”

Operating at threshold voltage

Near-threshold computing could be the key to decreasing power requirements without overturning the entire CMOS framework, the researchers say. Although low-voltage computing is already popular as an energy-efficient technique for ultralow-energy niche markets such as wristwatches and hearing aids, its large circuit delays lead to large energy leakages that have made it impractical for most computing segments. So far, these ultralow-energy devices have operated at extremely low “subthreshold” voltages, from around 200 millivolts down to the theoretical lower limit of 36 millivolts; conventional operation is at about 1.0 volt. Near-threshold operation, in contrast, occurs around 400-500 millivolts, near a device’s threshold voltage.

Operating at near-threshold rather than subthreshold voltages could provide a compromise, enabling devices to require less energy while minimizing the energy leakage. This improved trade-off could potentially open up low-voltage design to mainstream semiconductor products. However, near-threshold computing still faces the other three challenges mentioned earlier: a 10 times performance loss, a five times increase in performance variation, and an increase in functional failure rate of five orders of magnitude. These challenges have not been widely addressed so far, but the Michigan researchers spend the bulk of their analysis reviewing the current research to overcome these barriers.

Part of the attraction of near-threshold computing is that it could have nearly universal applications in high-demand segments, such as data centers and personal computing. As the Web continues to grow, more data centers and servers are needed to host websites, and their power consumption is currently doubling about every five years. Personal computing devices, many of which are portable, could also benefit from increased battery lifetime due to reduced power needs.

Dreslinski notes that previous studies have shown that the impact of NTC on devices will vary based on a particular consumer’s usage. “A user who only uses their device for making phone calls won’t see much impact because most of the power is consumed outside the CPU,” he said. “However, users who utilize music/video players and other compute-intensive tasks on their phone could see significant battery life improvements and reduced heat generated by the device. Quantifying these numbers is difficult based on the varying workloads of users coupled with parallel advances in battery technologies. My unofficial estimate would be a 1.5x to 2x improvement in battery lifetime, although some users could see significantly more or less.”

Near-threshold computing could also be useful in sensor-based systems, which have applications in biomedical implants, among other areas. While these sensors may be only about 1 mm3 in size, they often require batteries that are many orders of magnitude larger than the electronics themselves. By reducing the power requirements of sensors by up to 100 times, near-threshold computing could open the doors to many possible future designs.

More information: Ronald G. Dreslinski, Michael Wieckowski, David Blaauw, Dennis Sylvester, Trevor Mudge. “Near-Threshold Computing: Reclaiming Moore’s Law Through Energy Efficient Integrated Circuits.” Proceedings of the IEEE, Vol. 98, No. 2, February 2010. DOI: 10.1109/JPROC.2009.2034764
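The headline savings follow largely from the roughly quadratic dependence of dynamic switching energy on supply voltage (E ≈ C·V²), a standard relationship rather than a figure quoted in the article. The sketch below is a back-of-the-envelope illustration of that relationship only; the capacitance value is a made-up placeholder, and the 10-100x system-level projection also depends on architectural changes the sketch does not model.

```python
# Back-of-the-envelope sketch of the voltage/energy relationship that motivates
# near-threshold computing: dynamic switching energy scales roughly as C * V^2.
# The switched-capacitance value is an illustrative placeholder; the ~1.0 V and
# ~0.45 V operating points are the figures quoted in the article.

SWITCHED_CAPACITANCE = 1e-9  # farads per operation (illustrative only)

def dynamic_energy_per_op(vdd: float) -> float:
    """Approximate dynamic energy per operation in joules: E = C * V^2."""
    return SWITCHED_CAPACITANCE * vdd ** 2

nominal = dynamic_energy_per_op(1.0)   # conventional full-voltage operation
ntc = dynamic_energy_per_op(0.45)      # near-threshold operation (~400-500 mV)

print(f"energy ratio (nominal / NTC): {nominal / ntc:.1f}x")  # roughly 5x
# The article's 10-100x system-level projection also relies on architectural
# changes (e.g. many slower cores running in parallel) and on managing leakage,
# which this sketch does not model.
```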


Dingoes, like wolves, are smarter than pet dogs

(PhysOrg.com) — Studies in the past have shown that wolves are smarter than domesticated dogs when it comes to solving spatial problems, and new research has now shown that dingoes also solve such problems well.

Image: Dingo

The dingo is considered a “pure” prehistoric dog, brought to Australia thousands of years ago by the Aborigines. While dingoes have in the past been associated with humans, they have adapted to surviving “wild” in the Australian outback. The dingo lies somewhere between the wolf, its ancient ancestor, and the domestic or pet dog, and differs cognitively from both. There has been little research done on dingoes, even though such studies would aid in understanding the evolution of dogs, and it was unknown whether the dingo was more “wolf-like” or “dog-like”.

Researchers in South Australia have now subjected the Australian dingo (Canis dingo) to the classic “detour task,” which has been used by previous researchers to assess the abilities of wolves (Canis lupus) and domestic dogs (Canis familiaris) to solve non-social, spatial problems. The detour task involves placing a treat behind a transparent or wire mesh fence. The dog can see the food but cannot get to it directly, and has to find its way along the fence and through a door and then double back to get the food. Previous research has shown wolves are adept at solving the problem quickly, while domesticated dogs generally perform poorly and fail to improve significantly even after repeated trials. The wolves were also able to adapt easily when conditions were reversed, a task at which pet dogs again generally fared poorly.

Until now dingoes had not been tested, so lead researcher Bradley Smith, a PhD student at the School of Psychology at the University of South Australia, decided to subject 20 sanctuary-raised dingoes to the V-shaped detour task, in which a V-shaped fence is the barrier to the treat (a bowl of food) placed at the intersection point of the V, and the detour doors swung either inward or outward.

The dingoes were randomly assigned to one of four experimental conditions previously used to test dogs and wolves: the inward or outward detour (with doors closed), the inward detour (with doors open), and the inward detour (with a human demonstrator). Each dingo was tested four times and then given a fifth trial with the conditions reversed.

The results showed the dingoes completed the detour tasks successfully, making fewer errors and solving the problems more quickly (in around 20 seconds) than domestic dogs tested in previous research. Unlike domesticated dogs in previous studies, the dingoes did not look to humans for help; only one dingo even looked at the human while solving the problem. This behavior was much more similar to findings with wolves than with pet dogs.

The findings were published in the journal Animal Behaviour. All tests were carried out at the Dingo Discovery Centre in Victoria.

More information: References: — dx.doi.org/10.1016/j.anbehav.2010.04.017 — courses.media.mit.edu/2003spri … ciallearningdogs.pdf — dx.doi.org/10.1006/anbe.2001.1866


Modeling the miniscule: High-resolution design of nanoscale biomolecules

(PhysOrg.com) — A key element of both biotechnology and nanotechnology is – perhaps unsurprisingly – computational modeling. Frequently, in silico nanostructure design and simulation precedes actual experimentation. Moreover, the ability to use modeling to predict biomolecular structure lays the foundation for the subsequent design of biomolecules. Historically, the problem has been that most modeling software faces a tradeoff: it can be general purpose and model systems at high (atomic) resolution, but is then limited in scope, i.e., it explores only a small fraction of conformational space around the initial structure. Recently, however, Stanford University scientists have developed an algorithm – implemented in a modeling program known as MOSAICS (Methodologies for Optimization and SAmpling In Computational Studies) – that achieves nanoscale modeling at the resolution required without being limited by the scope/size dilemma. In addition, the researchers successfully modeled – and benchmarked the new computational modeling technique with – RNA-based nanostructures.

The research team – Adelene Y. L. Sim in the Department of Applied Physics, and Prof. Michael Levitt and Dr. Peter Minary in the Department of Structural Biology – faced a range of challenges in devising their unique algorithm. Speaking with PhysOrg, Minary and Sim describe those challenges. “Reducing dimensionality may eliminate physically relevant paths connecting conformational basins and therefore introducing artificial energy barriers that do not present obstacles in Cartesian space,” Minary tells PhysOrg. “In the present case, the major challenge was to develop an algorithm that supports degrees of freedom representing arbitrary collective rearrangements at all-atom resolution.”

Unfortunately, Minary notes, using these degrees of freedom, or DOFs, could break chain connectivity – and the corresponding conformational space is likely to be associated with extremely rough energy surface topology. “To overcome these limitations,” he adds, “less collective rearrangements need to be utilized only at a necessary extent so that rearrangements along the more collective DOFs are optimally facilitated without significantly increasing the volume of the sampled conformational space.” In short, their major challenge was implementing a universal algorithm capable of exploring conformational space while allowing numerous sets of arbitrary and/or user-defined so-called natural DOFs.

The team addressed these issues, Minary says, by building on the preexisting high-level computational environment of the MOSAICS software package, which enabled the use of arbitrary, even chain-breaking, DOFs. “To further improve on this concept,” he adds, “a very flexible new interface had to be invented that welcomes users to define their own system specific DOFs. In addition, the interface also had to support the weighted superposition of arbitrary DOFs. Finally a universal algorithm that realizes the interaction of various sets of DOFs needed to be implemented.” By so doing, conformational paths along the most collective molecular rearrangements are augmented by the incorporation of progressively more detailed molecular flexibility without significantly altering the dimensionality problem, which is better quantified by the conformational volume to be sampled rather than the actual number of DOFs.

Image: Effects of adding hierarchical degrees of freedom on sampling a large symmetric RNA structure. (A) Hierarchical moves used. A system of this complexity has many possible collective motions. Here seven sets of independent degrees of freedom (L1 to L7) are defined. (B) Convergence is accelerated by higher order rigid body moves. When nested hierarchical moves L1 to L7 were used, rapid convergence to the limiting value is reached within 2 × 10^4 iterations (vertical dashed line labeled *). Image Copyright © PNAS, DOI: 10.1073/pnas.1119918109

Other innovations are also in the works. “In the current paper we showed that our algorithm satisfies some necessary conditions of phase space, or detailed balance, preserving sampling not satisfied by any of the available algorithms used to model RNA systems,” Minary notes. “Further efforts are invested to fully satisfy microscopic reversibility.” Moreover, computational efficiency may be improved by using information on the collective nature of DOFs when updating atomic interactions, or by defining energy functional forms in terms of low dimensional analytical coordinates. Minary points out that sampling efficiency could also be improved if the current approach is combined with advanced sampling algorithms based on multi-canonical sampling available in MOSAICS.

In addition, he continues, the movement of explicit water could be incorporated into the hierarchical moves so that effects of solvation can be more accurately evaluated – and testing the method with various implicit solvent representations may also be informative. “Finally,” he says, “we’re planning to introduce a more user friendly – possibly graphical – interface that would bridge the gap between algorithm developers and computational biologists, physicists and chemists who have great insight and intuitions about the natural DOFs of various molecular assemblies and complexes.” Altogether, the above efforts, which would increase mathematical rigor, computational speed, solvent detail and accessibility to users, could further extend the boundaries of applications beyond the systems currently being considered.

In the meantime, while developing all the necessary algorithms discussed above, the team plans to continue extending the range of target applications. “Besides modeling the structure of chromatin,” Minary illustrates, “we’d like to revisit questions in DNA nanotechnology.” Furthermore, the use of a refinement method other than Cryo-EM (Cryo-Electron Microscopy, a form of transmission electron microscopy in which samples are studied at cryogenic temperatures, and which the team is already pursuing) is also planned. “We intend to extend our work to extensively explore RNA junction flexibility,” adds Sim, “and are also currently looking into using our technique in RNA structure prediction of large RNA systems.” In terms of applications, Sim continues, “in medicine it’s vital to understand the flexibility, stability, shape and possible distortions of nanostructures to better evaluate the nanostructure quality. These properties could play crucial roles in dictating cellular internalization and/or toxicity of nanostructures.”

Sim points out that with their efficient modeling tool, although still dependent on the quality of the force field used, the team is now more capable of studying these properties in silico. “Additionally,” Sim notes, “we’re looking into optimization in sequence- and structure-space simultaneously by having sequence as an additional degree of freedom.” A possible application is the sequence design of silencing RNA, or siRNA.

Looking further afield, Minary tells PhysOrg, there are other technologies and applications that might benefit from their findings. “Since proper sampling and exploration of the conformational space is a basic tool used in various technologies and applications, the method could be used in design, homology modeling and various new applications such as modeling collective rearrangements in trans-membrane proteins, designing new nucleic acid nanostructures, modeling large protein-nucleic acid assemblies, such as the ribosome, and the in silico study of chromatin remodeling. In addition,” he adds, “we’d like to aid the refinement and interpretation of experimental techniques.” Specifically, building on former efforts to refine Cryo-EM data, they’d like to develop tools to analyze NMR, FRET, SAXS, X-Ray, and footprinting experiments in order to generate conformational ensembles that satisfy experimental constraints.

Finally, Minary points out that the algorithm they developed is very general in nature and could also be utilized in other disciplines that involve state spaces with a large number of variables changing in a correlated manner. “In particular,” he concludes, “the basic idea could be used but not limited to sampling the space of possible networks, as in systems biology applications, or stock market variables.”

More information: Modeling and design by hierarchical natural moves, PNAS, February 21, 2012, vol. 109, no. 8, 2890-2895, DOI: 10.1073/pnas.1119918109
Related:
The effects of polymeric nanostructure shape on drug delivery, Advanced Drug Delivery Reviews, Volume 63, Issues 14–15, November 2011, Pages 1228–1246, DOI: 10.1016/j.addr.2011.06.016
Square-Shaped RNA Particles from Different RNA Folds, Nano Letters, February 24, 2009 (Web), 9 (3), 1270–1277, DOI: 10.1021/nl900261h
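To make the idea of hierarchical moves concrete, the following is a deliberately simplified toy sketch, not the MOSAICS implementation: a Metropolis sampler whose proposals mix rare, large “collective” updates of many coordinates with frequent, small “local” updates of one coordinate. The energy function, step sizes, and coordinate vector are all made-up placeholders.

```python
# Toy Metropolis sampler illustrating the spirit of hierarchical moves:
# occasional collective proposals move many degrees of freedom together,
# while frequent local proposals refine individual degrees of freedom.
import math
import random

def energy(x):
    """Toy energy: a rough multi-well landscape over the coordinates."""
    return sum((xi ** 2 - 1.0) ** 2 for xi in x)

def metropolis_step(x, temperature=1.0, p_collective=0.1):
    """One Metropolis step using either a collective or a local move."""
    proposal = list(x)
    if random.random() < p_collective:
        shift = random.gauss(0.0, 0.5)          # collective: shift all DOFs together
        proposal = [xi + shift for xi in proposal]
    else:
        i = random.randrange(len(proposal))     # local: perturb a single DOF
        proposal[i] += random.gauss(0.0, 0.1)
    delta = energy(proposal) - energy(x)
    if delta <= 0 or random.random() < math.exp(-delta / temperature):
        return proposal                         # accept the proposal
    return x                                    # reject and keep the old state

if __name__ == "__main__":
    x = [0.0] * 10
    for _ in range(5000):
        x = metropolis_step(x)
    print("final energy:", round(energy(x), 3))
```

In MOSAICS the “collective” proposals are physically motivated (whole helices, junctions, or nested rigid-body groups rather than a uniform shift), which is what the paper means by natural degrees of freedom.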


E-mail use model appears to follow “Clash of Civilizations” prediction

(Phys.org) — Researchers at Stanford University have built a model based on the frequency of e-mail interactions between groups of Yahoo! e-mail users throughout the world. In studying their results, they have found, as they report in a paper uploaded to the preprint server arXiv, that the interactions appear to adhere to the societal boundaries described in Samuel Huntington’s 1992 book “The Clash of Civilizations.”

Image: The Mesh of Civilizations. Source: Yahoo! email dataset. Rescaled densities; only top 1,000 densities displayed. Credit: arxiv.org/abs/1303.0045

Huntington famously suggested in his book that future wars would revolve around cultural and religious differences, and even offered a list of the groups involved: Sinic, Hindu, Islamic, Latin American, Western, Orthodox, African and Buddhist.

The researchers at Stanford, led by Bogdan State, didn’t set out to create a model that would reflect Huntington’s vision, but instead found that it came about on its own after the data was compiled and graphed. Their model is based on over ten million e-mail messages sent by Yahoo! users the world over. To show the degree of interaction between groups, the team used nodes and lines between them—the more transactions between groups, the closer the groups appear on the model. They carefully note that only Yahoo! users who agreed to have their data used in the study were included. To assign messages to geographic areas, the team compared the IP addresses attached to messages with the location noted in each user’s profile, using only those that coincided.

The resulting color-coded model offers near-instant visual clues about groups bound together by culture and perhaps religion. Perhaps more importantly, it also shows boundaries which, State and his team claim, resemble the model first proposed by Huntington. Western nodes are clustered to form a single group with just a few outliers, for example, as are others such as those deemed Islamic or South American.

The model doesn’t hint at tensions between groups, of course, but it does seem to indicate that groups tend to communicate more via e-mail with others in their same group than with those from other groups, even if they share a physical border. Other patterns that show up indicate what would seem natural—that people who speak the same language tend to send more e-mails to each other than to people who don’t. People in Great Britain, for example, appear to send more e-mails to people in Australia than to people in other, much closer, European countries.

More information: The Mesh of Civilizations and International Email Flows, arXiv:1303.0045 [cs.SI] arxiv.org/abs/1303.0045

Abstract
In The Clash of Civilizations, Samuel Huntington argued that the primary axis of global conflict was no longer ideological or economic but cultural and religious, and that this division would characterize the “battle lines of the future.” In contrast to the “top down” approach in previous research focused on the relations among nation states, we focused on the flows of interpersonal communication as a bottom-up view of international alignments. To that end, we mapped the locations of the world’s countries in global email networks to see if we could detect cultural fault lines. Using IP-geolocation on a worldwide anonymized dataset obtained from a large Internet company, we constructed a global email network. In computing email flows we employ a novel rescaling procedure to account for differences due to uneven adoption of a particular Internet service across the world. Our analysis shows that email flows are consistent with Huntington’s thesis. In addition to location in Huntington’s “civilizations,” our results also attest to the importance of both cultural and economic factors in the patterning of inter-country communication ties.

via Arxiv Blog
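The basic data-processing step is simple in outline: count messages per country pair, then rescale to offset uneven adoption of the service. The sketch below is a toy version of that outline, not the authors’ actual rescaling procedure; the message list and per-country user counts are made-up placeholders.

```python
# Toy sketch of building a rescaled country-to-country e-mail graph.
# The rescaling here (messages divided by the product of user counts) is a
# crude stand-in for the paper's procedure; all data below is invented.
from collections import Counter

messages = [("GB", "AU"), ("GB", "AU"), ("GB", "FR"), ("US", "GB"), ("SA", "EG")]
users = {"GB": 100, "AU": 40, "FR": 80, "US": 300, "SA": 30, "EG": 25}

# Undirected raw flows: count messages per unordered country pair.
raw_flows = Counter(tuple(sorted(pair)) for pair in messages)

def rescaled_density(pair, count):
    """Messages per (user_i * user_j), a simple correction for adoption."""
    a, b = pair
    return count / (users[a] * users[b])

densities = {pair: rescaled_density(pair, n) for pair, n in raw_flows.items()}
for pair, density in sorted(densities.items(), key=lambda kv: -kv[1]):
    print(pair, round(density, 5))
```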


Amphibious fish found to use evaporative cooling to overcome hot water

(Phys.org) — A team of researchers affiliated with the University of Guelph and Brock University, both in Canada, has found the first example of an amphibious fish using evaporative cooling to chill its overheated body. In their paper published in the journal Biology Letters, the researchers describe their study, which included raising Kryptolebias marmoratus, aka mangrove rivulus, to adulthood and then testing them by heating the water in which they lived.

Image: Mangrove rivulus. Credit: Wikipedia

There are many varieties of amphibious fish—fish that jump or crawl out of the water to spend time on land—but until now, no one had seen an example of one that leaves the water to use evaporative cooling to chill its body after swimming in water that was too hot. Prior research had shown that mangrove rivulus jump (or more accurately, flip themselves out of the water by bending and then releasing quickly), but it was not clear why they did so—other amphibious fish have been known to get out of the water if CO2 builds up, if there are pollutants, or even to snag a meal, but none of that seemed to apply to the mangrove rivulus.

To find out more about the fish (which look sort of like tadpoles), the researchers raised some specimens for a year in lab tanks at a temperature of 25 or 30°C, and also collected wild adults, placed them in tanks, and acclimated them for a time at the same temperatures. Then they watched and filmed (with a thermal imaging camera) what happened as the temperature of the water was raised. The fish, as expected, hurled themselves out of the water onto the “shore” when the temperature reached approximately 36°C. The researchers also enclosed the fish tanks so that they could create different levels of humidity, and found that the fish cooled better in lower-humidity environments. They also found that, despite high humidity, the fish could all cool themselves down to ambient temperature within minutes.

In studying the fish, the researchers found not only that it was able to use evaporative cooling, but also that its behavior demonstrated plasticity, because the response depended on recent acclimation history rather than on conditions when the fish were young. This suggests the fish is remarkably well suited to handling warmer waters as the planet heats up.

More information: Out of the frying pan into the air—emersion behaviour and evaporative heat loss in an amphibious mangrove fish (Kryptolebias marmoratus), Biology Letters, published 21 October 2015. DOI: 10.1098/rsbl.2015.0689

Abstract
Amphibious fishes often emerse (leave water) when faced with unfavourable water conditions. How amphibious fishes cope with the risks of rising water temperatures may depend, in part, on the plasticity of behavioural mechanisms such as emersion thresholds. We hypothesized that the emersion threshold is reversibly plastic and thus dependent on recent acclimation history rather than on conditions during early development. Kryptolebias marmoratus were reared for 1 year at 25 or 30°C and acclimated as adults (one week) to either 25 or 30°C before exposure to an acute increase in water temperature. The emersion threshold temperature and acute thermal tolerance were significantly increased in adult fish acclimated to 30°C, but rearing temperature had no significant effect. Using a thermal imaging camera, we also showed that emersed fish in a low humidity aerial environment (30°C) lost significantly more heat (3.3°C min−1) than those in a high humidity environment (1.6°C min−1). In the field, mean relative humidity was 84%. These results provide evidence of behavioural avoidance of high temperatures and the first quantification of evaporative cooling in an amphibious fish. Furthermore, the avoidance response was reversibly plastic, flexibility that may be important for tropical amphibious fishes under increasing pressures from climatic change.
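The reported heat-loss rates are consistent with the “within minutes” claim. The sketch below is a crude estimate only: it assumes the reported rates hold constant all the way down, whereas real cooling slows as the fish approaches air temperature.

```python
# Crude estimate of how long an emersed fish takes to cool from the ~36°C
# emersion threshold to the 30°C aerial temperature used in the study,
# assuming (as a simplification) constant cooling at the reported rates.
EMERSION_TEMP_C = 36.0
AMBIENT_TEMP_C = 30.0

cooling_rates_c_per_min = {"low humidity": 3.3, "high humidity": 1.6}

for condition, rate in cooling_rates_c_per_min.items():
    minutes = (EMERSION_TEMP_C - AMBIENT_TEMP_C) / rate
    print(f"{condition}: roughly {minutes:.1f} minutes to reach ambient")
```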


Autumn magic with ethereal whites

Designer Aneeth Arora’s collection Pero Spring 2014 is on display at Moon River in the Capital. The collection consists of an exclusive array of ethereal whites with vintage beading. Arora’s particular brand of anti-fit, frumpy-chic clothing is a hard sell in India, where the average person won’t understand the concept of paying four figures for a mulmul shirt. However, her understanding of delicate textiles and her persistent effort in elevating their status have earned her fans among international buyers and won her Vogue India’s first-ever Fashion Fund Award. All of which, combined, has turned her into a hot ticket on the Indian fashion circuit.

Although Arora is not one to experiment with silhouettes or colours, this collection offered far more variety than her usual efforts. Her dainty slip dresses are ephemeral, but the collection also includes androgynous pant suits decked out in hot pink gingham checks. Even saris made it to the final cut, polka dotted and worn with soft jackets instead of the traditional choli blouse.


Responsibility towards heritage

Lending support to the Government’s efforts to promote cleanliness, GAIL (India) Limited has adopted two historical monuments in Delhi, Purana Quila and Safdarjung Tomb, for upkeep and sanitation as part of the Swachh Bharat Abhiyaan. The initiative is part of GAIL’s corporate social responsibility. Swachh Bharat Abhiyaan is a national-level campaign by the Government of India, covering 4,041 statutory towns, to clean the streets, roads and infrastructure of the country.

GAIL employees have also been regularly participating in cleanliness drives in and around the company’s corporate office at Bhikaiji Cama Place for at least two hours every week since the commencement of the Swachh Bharat Abhiyaan. Activities such as cleaning office peripheral areas, roads, railway stations, government schools, beaches and villages, delivering lectures to school-going children on health tips, and sensitizing contract personnel to the importance of hygiene are being carried out at GAIL work sites across the country.

In addition, GAIL (India) Limited has taken over the work of constructing over 1,000 toilets in schools across the country, a significant move towards fulfilling the PM’s commitment to providing hygienic sanitation facilities for girl students. GAIL (India) Limited is the largest state-owned natural gas processing and distribution company in India.


Exercise helps people overpower depression

The study divided 62 individuals with diagnosed clinical depression into three groups: two groups participated in different types of exercise with a physiotherapist twice a week for 10 weeks, while the third, the control group, did not take part in systematic exercise.

“In our follow-up interviews for the study, participants spoke about how they felt alive again and became more active. One woman expressed… the workout ‘kickstarts my body and helps me get the strength to crawl out of this cocoon that I am in’,” said PhD student Louise Danielsson.

People who participated in exercise aimed at increasing their physical fitness clearly improved their mental health compared with the control group. Even participants who were taught basal body awareness by the physiotherapist reduced their depressive symptoms, although not as significantly.

The studies show that the participants who exercised felt they had the strength to do more at home and to engage in more social contacts.

“Our results show that exercise can be used within primary care for the rehabilitation of people with depression,” Danielsson said.
