Saturday, September 19, 2009

A Brief History of Time

A Brief History of Time - Stephen Hawking
An amazing and fascinating documentary about Stephen Hawking's life and the ideas from his book of the same title.

A Brief History of Time attempts to explain a range of subjects in cosmology, including the Big Bang, black holes, light cones and superstring theory, to the nonspecialist reader. Its main goal is to give an overview of the subject but, unusually for a popular science book, it also attempts to explain some complex mathematics. The author notes that an editor warned him that every equation in the book would halve the readership, hence it includes only a single equation: E = mc². In addition to abstaining from equations, Hawking simplifies matters by means of illustrations throughout the text, depicting complex models and diagrams.
In 1991, Errol Morris directed a documentary film about Hawking. Although they share a title, the film is a biographical study of Hawking, not a filmed version of the book. Here it is:

Watch Documentary - A Brief History of Time - Prof. Stephen Hawking


Watch Free Full Length Movies on Well Worth Watching
wellworthwatching movies

Thursday, September 17, 2009

Consciousness, Creativity & the Brain

Exploring the frontiers of consciousness, creativity and the brain - a lecture by Professor John Hagelin, PhD. Hagelin is one of the present day's most prominent figures in understanding consciousness from both scientific and spiritual perspectives. He is also a member of the Maharishi Foundation.

John Hagelin (born June 9, 1954) is an American scientist who was a researcher at the European Center for Particle Physics (CERN) and the Stanford Linear Accelerator Center (SLAC). He is an educator and author, and has been the Natural Law Party candidate for President of the United States three times. Hagelin is Professor of Physics and Director of the Institute of Science, Technology and Public Policy at Maharishi University of Management, Executive Director of the International Center for Invincible Defense, President of the US Peace Government, Raja of Invincible America, and Executive Director of Global Financial Capital.

Watch Lecture - Exploring the frontiers of Consciousness, creativity and the Brain

In 1981, Hagelin received his Ph.D. from Harvard, having already published five serious papers on particle theory. That same year, Hagelin won a postdoctoral research appointment at CERN (the European Center for Particle Physics) in Switzerland, and in 1983 was recruited by SLAC (the Stanford Linear Accelerator Center), CERN's North American counterpart.

In 1984, Hagelin shifted his appointment from SLAC to Maharishi International University (MIU), where he continued his research in physics, pursued a long-time interest in brain and cognitive science research, and established an accredited doctoral program in theoretical physics. Hagelin’s move to MIU in 1984 surprised and puzzled his colleagues. Howard Georgi and John Ellis tried to talk him out of it. But, according to Georgi, Hagelin “continued to do good physics anyway.” Nobel Laureate Sheldon Glashow was quoted in a 1992 article as saying, “His papers are outstanding. We read them before he went to MIU and we read them now.” Hagelin remained in contact with colleagues from Harvard, Stanford, and CERN, and continued to collaborate with them. While at MIU, his contributions to the field of theoretical physics were supported by funding from the National Science Foundation.

Currently, Hagelin teaches physics as Professor of Physics at Maharishi University of Management (formerly MIU) and serves as Director of the Institute of Science, Technology and Public Policy at that institution. Hagelin is also identified as the Founding President of Maharishi Central University, which was announced in 2007. Central University was under construction in Smith Center, Kansas at the site of a previously-announced Peace Palace until early 2008, when, according to Hagelin, the project was put on hold while the TM Organization dealt with the death of the Maharishi.

In 1987 and 1989, Hagelin published two papers in MUM's Journal of Modern Science and Vedic Science on the relationship between physics and consciousness. These papers discuss the Vedic understanding of consciousness as a field and compare it with theories of the unified field derived by modern physics. Hagelin argues that these two fields have almost identical properties and quantitative structure, and he presents other theoretical and empirical arguments that the two fields are actually one and the same—specifically, that the experience of unity at the basis of the mind achieved during the meditative state is the subjective experience of the very same fundamental unity of existence revealed by unified field theories.

Part of the evidence Hagelin presents for this explanation is the body of research on the effects that practitioners of the Transcendental Meditation technique and of the more advanced TM-Sidhi program (which includes a practice called "Yogic Flying") have on measured parameters in society. This phenomenon is called the "Maharishi Effect". In these two papers he cites numerous studies of such effects, and in the summer of 1993 he conducted a large-scale study of this type himself. Hagelin recruited approximately 4,000 TM-Sidhi program practitioners to the Washington, D.C. area, where they practiced the TM-Sidhi techniques twice daily in a group. Using data obtained from the District of Columbia Metropolitan Police Department for 1993 and the preceding five years (1988–1992), Hagelin and collaborators followed the changes in crime rates for the area before, during, and after the six weeks the group was gathered in Washington, D.C. The study, which reported a highly statistically significant drop in crime relative to predicted levels after controlling for the effects of temperature changes, was published in Social Indicators Research in 1999.
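The design described here — comparing observed crime against what a model predicts once temperature is controlled for — can be illustrated with a toy dummy-variable regression. This is a minimal sketch with invented numbers, not the published study's actual model or data; the `ols` helper and the weekly figures below are assumptions for illustration only.

```python
# Illustrative sketch: regress weekly crime counts on an "intervention"
# indicator and mean temperature, the two factors the study is described
# as accounting for. All numbers are invented.

def ols(X, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y,
    solved with Gaussian elimination. X is a list of row lists."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for col in range(k):                      # forward elimination
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k                          # back substitution
    for r in range(k - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta

# Toy weekly data: ([intercept, during_intervention, mean_temp_F], crime count)
weeks = [
    ([1, 0, 70], 420), ([1, 0, 75], 435), ([1, 0, 80], 455),
    ([1, 1, 85], 410), ([1, 1, 88], 415), ([1, 1, 90], 420),
    ([1, 0, 82], 460), ([1, 0, 78], 445),
]
X = [row for row, _ in weeks]
y = [count for _, count in weeks]
intercept, intervention_effect, temp_effect = ols(X, y)
print(round(intervention_effect, 1))  # negative: fewer crimes than temperature alone predicts
```

The point of the dummy variable is that the intervention coefficient measures the gap between observed counts and the temperature-driven prediction, which is the kind of comparison the study's critics and defenders argue over.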

Physicist Victor J. Stenger wrote in The Humanist that John Hagelin talks about "quantum consciousness" and that quantum consciousness is a "myth" that "should take its place along with gods, unicorns, and dragons as yet another product of the fantasies of people unwilling to accept what science, reason, and their own eyes tell them about the world."

Peter Woit writes in his book, Not Even Wrong: The Failure of String Theory And The Search For Unity In Physical Law, that "virtually every theoretical physicist in the world" rejects Hagelin's attempt to identify the "unified field" of superstring theory with the Maharishi's "unified field of consciousness" as "utter nonsense, and the work of a crackpot."

In 1992, Hagelin was honored with a Kilby International Award for his work in particle physics leading to the development of supersymmetric grand unified field theories, for his innovative applications of advanced principles from control systems theory and optimization theory to digital sound reproduction, and for his research on human consciousness. Chris Anderson questioned the value of the award in an article about Hagelin published in Nature.

In 1994, Hagelin was awarded the Ig Nobel Prize for Peace, an annual parody award given for achievements that “first make people laugh and then make them think.” The award was given for the experimental conclusions drawn from the Washington, D.C. study.

Wikipedia - John Hagelin


Saturday, September 12, 2009

The Mind, Machines, and Mathematics

Creativity: The Mind, Machines, and Mathematics: A Public Debate
November 30, 2006 Running Time: 0:59:10
About the Lecture
Two of the sharpest minds in the computing arena spar gamely, but neither scores a knockdown in one of the oldest debates around: whether machines may someday achieve consciousness. (NB: Viewers may wish to brush up on the work of computer pioneer Alan Turing and philosopher John Searle in preparation for this video.)

Ray Kurzweil confidently states that artificial intelligence will, in the not distant future, “master human intelligence.” He cites the “exponential power of growth in technology” that will enable both a minute, detailed understanding of the human brain, and the capacity for building a machine that can at least simulate original thought. The “frontier” such a machine must cross is emotional intelligence—“being funny, expressing loving sentiment…” And when this occurs, says Kurzweil, it’s not entirely clear that the entity will have achieved consciousness, since we have no “consciousness detector” to determine if it is capable of subjective experiences.

Acknowledging that his position will prove unpopular, David Gelernter launches his attack: “We won’t even be able to build super-intelligent zombies unless we approach the problem right.” This means admitting that a continuum of cognitive styles exists among humans. As for building a conscious machine, he sees no possibility of one emerging from even the most sophisticated software. “Consciousness means the presence of mental states strictly private with no visible functions or consequences. A conscious entity can call on a thought or memory merely to feel happy, be inspired, soothed, feel anger…” Software programs, by definition, can be separated out, peeled away and run in a logically identical way on any computing platform. How could such a program spontaneously give rise to “a new node of consciousness?”

Kurzweil concedes the difficulty of defining consciousness, but does not want to wish away the concept, since it serves as the basis for our moral and ethical systems. He maintains his argument that reverse engineering of the human brain will enable machines that can act with a level of complexity, from which somehow consciousness will emerge.

Gelernter replies that believing this “seems a completely arbitrary claim. Anything might be true, but I don’t see what makes the claim plausible.” Ultimately, he says, Kurzweil must explain objectively and scientifically what consciousness is -- “how it’s created and got there.” Kurzweil stakes his claim on our future capacity to model digitally the actions of billions of neurons and neurotransmitters, which in humans somehow give rise to consciousness. Gelernter believes such a machine might simulate mental states, but not actually pass muster as a conscious entity. Ultimately, he questions the desirability of building such computers: “We might reach the state some day when we prefer the company of a robot from Walmart to our next-door neighbor or roommates.”

Cognitive Neuroscience of Aging


"This is an ambitious undertaking...chapters dense in information, but actually it works..."--The Psychologist
"This excellent book marks the advent of a new discipline, the cognitive neuroscience of aging. It comprehensively covers measurement tools, empirical findings, and theoretical models. Editors and authors are leading scholars of this evolving discipline. I highly recommend this book to everyone interested in the intriguing dynamic between brain and cognition in old age." -Ulman Lindenberger, Professor of Psychology, Max Planck Institute for Human Development and Director, Center for Lifespan Development
"This is the right book, by the right authors, at the right time. The editors have assembled most of the leading investigators taking a neuroscience approach to the study of cognitive aging, and have asked them to write integrative reviews of the existing literature and to speculate about productive directions for future research. The result is not only a compendium of, in the editors' words, "state-of-the-art knowledge about the cognitive neuroscience of aging in 2004," but also a valuable source of ideas for research over the next 5 to 10 years." -Timothy Salthouse, Brown-Forman Professor of Psychology, University of Virginia

Product Description
Until very recently, what we knew about the neural basis of cognitive aging was based on two disciplines that had very little contact with each other. Whereas the neuroscience of aging investigated the effects of aging on the brain independently of age-related changes in cognition, the cognitive psychology of aging investigated the effects of aging on cognition independently of age-related changes in the brain. Because an increasing number of studies have focused on the relationships between cognitive aging and cerebral aging, these two disciplines have begun to interact. This rapidly growing body of research has come to constitute a new discipline: the cognitive neuroscience of aging. The goal of this book is to introduce this new discipline at a level that is useful to both professionals and students in cognitive neuroscience, cognitive psychology, neuroscience, neuropsychology, neurology, and related areas.

The book is divided into four main sections. The first section describes noninvasive measures of cerebral aging, including structural (e.g., volumetric MRI), chemical (e.g., dopamine PET), electrophysiological (e.g., ERPs), and hemodynamic (e.g., fMRI) measures, and discusses how they can be linked to behavioral measures of cognitive aging. The second section reviews evidence for the effects of aging on neural activity during different cognitive functions, including perception and attention, use of imagery, working memory, long-term memory, and prospective memory. The third section focuses on clinical and applied concerns, such as the distinction between healthy aging and aging with Alzheimer's disease, and the use of cognitive training to ameliorate age-related cognitive decline. The final section describes theories that relate cognitive and cerebral aging, including models accounting for functional neuroimaging evidence and models supported by computer simulations.
Taken together, the chapters in this volume provide the first unified and comprehensive overview of the new discipline of cognitive neuroscience of aging.

See all Editorial Reviews
Product Details

* Format: Kindle Edition
* Print Length: 408 pages
* Publisher: Oxford University Press, USA; 1st edition (October 22, 2004)
* Sold by: Amazon Digital Services
* Language: English

Buy Cognitive Neuroscience of aging and other Neuroscience related Kindle Books

How the Brain Invents the Mind

Opening Remarks - How the Brain Invents the Mind
Opening remarks by Dr. Susan Hockfield and a lecture by Rebecca Saxe PhD '03; June 6, 2009
Running Time: 1:39:22

About the Lecture
In trying financial times, Susan Hockfield remains optimistic and committed to pursuing MIT’s massive, multi-year initiatives in energy and life sciences. She prefaces her “whirlwind” tour of MIT for an alumni audience by referencing the campus-wide relief at the change in presidential administrations, which promises to make science and engineering more central, and to make “MIT values more mainstream.” If it indeed becomes “cool to be smart,” Hockfield believes MIT can count on taking a prominent national role in research, policy and education.

One key area in which MIT hopes to make a major contribution is sustainable energy. The MIT Energy Initiative, two years old, brings together faculty and students across all disciplines to develop a portfolio of new technologies (although the focus seems increasingly to fall on solar). Campus interest is so intense that the Institute has committed to a minor in energy, and it’s seeking five new professorships in the area. The other major enterprise involves fusing biological sciences with engineering, especially in the study of cancer. At the new Koch Institute, cancer biologists and engineers have already made “fundamental discoveries underlying new targeted cancer drugs,” and they are hard at work decoding the disease, and devising new methods for diagnosis and treatment.

Hockfield also candidly describes the impact of the economic downturn on the Institute, acknowledging that “most revenue streams have been compromised,” except for research. With the endowment down by 20-25%, departments across the board are making significant but strategic cuts for the next two to three years. MIT will not compromise on providing financial aid to needy students, a cost that understandably has risen in the past year, nor on hiring faculty. Hockfield hopes that private philanthropy will help MIT “preserve core strengths and values.” At the end of the recession, she says, “We want to come out with a leaner, stronger Institute.”

Fellow neuroscientist Rebecca Saxe outlines her research investigating the neural basis for a Theory of Mind -- how the human mind seems geared to “glean what others are thinking and feeling.” From her work with children and adults, Saxe has determined that there’s a very specific region of the brain -- the right temporo-parietal junction (RTPJ) -- dedicated to thinking about how others think. This area lights up in the fMRI scanner when people read stories involving another person’s beliefs and moral judgments, but not when they digest other kinds of written material. The RTPJ develops this special function slowly (young children don’t have it), and Saxe has discovered that she can interfere with this region’s activities, altering her subjects’ sense of what constitutes morally permissible behavior. She’s exploring whether these distinct neural networks develop differently in children with autism, with the hope of finding therapies that might someday help treat the disorder.


The Autistic Neuron

The Autistic Neuron - a lecture by Mark Bear; May 4, 2009
Running Time: 0:35:11
About the Lecture

This self-described “basic neuroscientist” confesses he never thought he’d give a talk on autism, but as Mark Bear recounts, decades of research in the basics are now paying off with important insights into the etiology and treatment of brain disorders, including autism.

Bear provides a primer on this developmental disorder, noting that its roots are biological, it is highly heritable, and astonishingly prevalent: one in 150 people express some of the symptoms of autism. These fall on a spectrum, from severely reduced social behavior, abnormal language, repetitive movements, seizures and mental retardation, to the milder Asperger’s Syndrome, where individuals are often academically successful, but socially awkward. Particularly significant to Bear: Autism’s underlying genetic changes manifest themselves in problematic communication between neurons.

To unravel autism, researchers are examining its clinical heterogeneity, “genetic risk architecture,” and how it alters brain connections and function. One of the difficulties in approaching autism is that a variety of genetic mutations can result in autistic behaviors, and only a few of these mutations have been identified. Bear himself has been probing the single-gene disorder Fragile X syndrome, responsible for about 5% of cases of “full-blown autism.” In Fragile X, the FMR1 gene is silenced, leading to a missing protein that serves as a key regulator of brain proteins involved in neuron communication. Without FMR1, “the brakes are missing,” and there’s excessive protein synthesis leading to altered brain function.

Bear hypothesized that it might be possible to correct Fragile X by bringing the system back into balance. He created mouse models of the disease and found that by reducing the number of neurotransmitter receptors that respond to the excessive brain proteins, he could ameliorate or correct Fragile X defects. These receptors are “druggable targets,” and, says Bear, “if the treatment works in fly, fish or mouse, it better work in humans or Darwin was wrong.”

Based on this work, drug companies are devising compounds to test in human clinical trials of Fragile X syndrome. In addition, Bear notes, colleagues have discovered that other mutations connected with autism also involve protein regulation problems. “This gets us excited, because it looks like a common pathway that causes synaptic dysfunction in different diseases that may ultimately manifest as autism. If that’s the case, then treatment for the disorder may be efficacious in multiple disorders.”


How the Brain Encodes Reward

How the Brain Encodes Reward - a lecture by Okihide Hikosaka
May 7, 2009 - Running Time: 0:51:02

About the Lecture
As Ann Graybiel puts it, “basal ganglia were dark basement structures” until Okihide Hikosaka began his classic 1980s research demonstrating how these neuronal clusters influenced eye movements. Hikosaka has deepened and broadened his work in this once neglected area of the brain, and brings a McGovern audience up to date on his latest discoveries.

Hikosaka briefly sketches what is known about the basic pathways leading in, around and out of the basal ganglia, circuits that have been associated with stress, pain, mood, memory and arousal. This specialized cluster of neurons seems especially attuned to the neurotransmitter dopamine, and Hikosaka has been investigating “a number of unsolved questions,” including how dopamine neurons form circuits for movement control, whether such neurons encode “motivational values,” and what other parts of the brain guide them.

Hikosaka describes research demonstrating that certain dopamine neurons become excited if a visual cue indicates a future reward, and become inhibited with a visual cue indicating no reward. Dopamine also increases after an action delivers a reward and decreases when an action produces no reward. Research began to explore whether dopamine neurons “encode motivational values, including reward and punishment.” After others’ studies yielded contradictory or uncertain conclusions, Hikosaka designed a set of studies on monkeys involving classical Pavlovian conditioning, with juice rewards and air puffs as aversive stimuli.

Among Hikosaka’s findings: some dopamine neurons were excited primarily by positive, reward-predicting stimuli, others inhibited by air puff-predicting stimuli. But he also found another group of dopamine neurons excited both by positive and negative reward-predicting stimuli (as well as the stimuli themselves). Hikosaka posited two types of neurons that react in very different ways to motivational signals, which he described as value-coding and salience-coding. He also determined that the lateral habenula, a part of the brain sitting at one end of the thalamus, seems to regulate dopamine pathways involved in some motivational responses. By sending a weak electric pulse through the lateral habenula, Hikosaka saw a very strong inhibition of the dopamine neurons that “encode mostly motivational values.”


Mission Control Operations

Christopher C. Kraft Jr. November 8, 2005
Running Time: 2:00:32

About the Lecture
Chris Kraft manages to present in a single event the ultimate in engineering case studies, as well as an insider’s history of 20th century space missions and a pep talk for Aero-Astro students. This blunt raconteur describes the challenges of the earliest space pioneers. His story begins with Project Mercury in the 1950s, whose space task group of 35 included eight secretaries. “We were capable people but didn’t know a damn thing about how to fly in space,” recalls Kraft. How would they communicate with a man in orbit, or assess his health? Most doctors thought when an astronaut left earth’s atmosphere, “he’d be a blithering idiot.” Air to ground communication in those days consisted of 20 words of teletype. “How do you make real time decisions in those circumstances?” muses Kraft. He proudly describes assembling the Mission Rules book, “probably the smartest thing we ever did,” which attempted to address all conceivable malfunctions on a space mission. This was an early example of systems engineering, says Kraft.

When President Kennedy challenged NASA to get a man on the moon by the end of the 1960s, “Chris Kraft did not know how to determine orbital mechanics from 30 seconds of radar at Cape Canaveral. I thought the president was a little daft.” Suddenly, there was a whole new set of problems, such as how to make sure a craft aimed at the moon did not just hit it. In the Gemini and then Apollo programs, Kraft’s team solved innumerable and breathtakingly difficult issues. “We did a lot of things by the seat of our pants because we didn’t know any other way. We did it by feel, by having seen the past and doing things the right way.”

Kraft has some harsh words for the current state of space exploration. He can’t countenance NASA’s abandoning the space shuttle. “We seem to have a great propensity in this country for building something wonderful, great and high performance and throwing it away….Golly, my mother would have gone bananas!” He believes that NASA could have made the shuttle much more efficient to fly, and used it as a key element in the new race back to the Moon and to Mars. Kraft doesn’t believe this program will get off the ground—mainly because NASA hasn’t built anything new in 25 years, “and they’ve forgotten what it takes to do it.” The next space mission, whatever it turns out to be, will depend on the current crop of young aerospace engineers. “Go do it, don’t be frightened to fail,” exhorts Kraft. “You learn more from your failures than from your successes.”


Robotics in Space Exploration

Robotics in Space Exploration

About the Lecture
As eager as he is to invent robots that can travel to a moon of Saturn or Jupiter, and function autonomously in these hostile environments, Rodney Brooks would love a shot to explore space himself. “I made an offer to Jeff Bezos, Larry Page and Sergey Brin that if they would fund a one-way mission to Mars, I’d go on it,” says Brooks. But he knows that robots are cheaper to send than us, “big bags of skin with biological processes requiring replenishment of all sorts.” Under the Bush Administration, NASA first laid out an ambitious program in robotic technology, involving sending machines to reconnoiter the moon and Mars and prepare habitation sites for humans. “Robots would dig channels, then lower habitation modules into them, and when people come, they’d live like moles underground,” says Brooks. But why send people at all if these robots can accomplish so much? It turns out that there’s a dangerously long lag time between sending a command to a robot and having the machine perform a function. Ultimately, human senses and timing will be needed on site.

But now NASA’s grand robotic research plans are on hold, says Brooks, blocked by the difficulties and enormous expense of designing a new launch vehicle. The future of sophisticated robotic work seems earthbound, says Brooks. First, there are military innovations -- Congress has mandated that by 2015, one-third of all US military missions should be unmanned. Also, the oil industry is pushing for machine-based solutions to such gritty problems as deep-ocean drilling and oil-well maintenance. And don’t forget the new billionaire space cowboys, who dream of mining platinum fields on asteroids (for fuel cells on earth), or building space tourism businesses. But, Brooks reminds us, we have a way to go: After 40 years of research, “the generic object recognition that a two-year-old child could do, we can’t do with our robots.”


Future Human Space Exploration

Developing the Hardware for Future Human Space Exploration

About the Lecture
While Michael Griffin sees a wealth of reasons for space exploration in general and returning to the moon in particular, NASA must still manage on a tiny portion of “the national treasure.” This seven-tenths of one percent of the national budget – the equivalent of each American paying 15 cents every day – “is not an expenditure we should do without,” Griffin asserts. We are driven to investigate beyond earth because curiosity and the desire to master new territory are “wired into our DNA.” But Griffin finds great value in the “opportunity for benign cooperative American leadership.” Space exploration strengthens the nation, society and the human species, he says.

Developing a foothold on the moon will afford humans experience in operating away from earth’s environment, helping to develop the technology needed for opening the space frontier -- practice for Mars and beyond. Griffin provides details on emerging models for a new crew exploration vehicle and booster rockets. NASA is attempting to take advantage of earlier designs for the sake of economy and speed – “architecture with as little fuss and bother as possible, maximizing the use of things we already own.” There will be plenty of commercial opportunities in these public missions, with NASA seeking to purchase launch and communication services as soon as available. And he envisions promoting international cooperation by offering seats in the lunar lander in exchange, in one example, for help in setting up a lunar habitat. “We don’t want to return to the days where NASA does everything,” says Griffin.


The Columbia Tragedy

The Columbia Tragedy: System Level Issues for Engineering - Sheila Widnall '60, SM '61, ScD '64
November 4, 2003. Running Time: 1:14:09

About the Lecture
Among the “tragedy of errors” that doomed the space shuttle Columbia, perhaps the most damning were NASA’s organizational blunders. Sheila Widnall served on the board investigating Columbia’s destruction in February 2003, and she can describe the technical failures that led, moment by moment, to the ghastly trail of debris across the western United States. But the investigation board traced the roots of this disaster to NASA’s “culture of invincibility,” years in the making. Well-intentioned people, Widnall states, became desensitized to deviations from the norm. NASA managers treated repeated anomalies -- such as foam smashing into shuttle tiles on takeoff -- as “maintenance turnaround events.”

Foam striking protective tiles on the leading edge of Columbia’s wing led to the horrors of re-entry: gases in excess of 5000 degrees F entered through a possibly 10-inch-wide breach in the wing, melting sensors and internal structure and sending the shuttle out of control. The failures that led to this moment were both engineering system failures and human communication failures.

Widnall and the investigation board recommended independent safety oversight for shuttle flights; NASA leadership that heeds minority points of view and doesn’t let scheduling or budget pressures define space missions; and routine inclusion of engineers who have the right to address both technological and operational issues of a flight.

For a recent article on the Columbia tragedy by William Langewiesche in The Atlantic Monthly, go to Columbia's Last Flight: The Inside Story of the Investigation—and the Catastrophe it Laid Bare


The Mysterious Field of Engineering Systems

Norman Augustine June 16, 2009 Running Time: 0:51:40

About the Lecture
One of the nation’s revered technology leaders dispenses anecdotes and wisdom on the slippery subject of engineering systems (or systems engineering). Norm Augustine just can’t get a handle on the discipline: “No one agrees on what it is, or what it does.” After years in industries like Lockheed Martin, Augustine has come up with “Norm’s Rules,” and can at least define ‘system’ as “having two or more elements that interact,” and ‘engineering’ as “creating the means for performing useful functions.” But these definitions don’t get you too far in the real world.

Augustine shows a fuel control system, which some engineers might view as part of a propulsion system. In turn, aeronautical engineers might think of the entire airplane as a system, and transport engineers view aircraft as merely components in systems incorporating airports, highways, shipping lanes. Augustine continues up the ladder until “our system that started as a fuel controller…seems to have the whole universe as a system.” Like Russian Matryoshka dolls, systems can always be embedded within larger systems. Even if you try to simplify a system in terms of just a few objects with a binary, on-off interaction, things can get complex very quickly. Five elements in a system can exist in more than a million possible states. Says Augustine, “A typical earth satellite has nearly one million parts; a 747 over 5 million. How does that make you feel about flying?”
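Augustine's "more than a million states" for five elements can be reproduced under one illustrative assumption (mine, not necessarily his): treat each ordered pair of distinct elements as having an interaction that is independently on or off. Five elements then give 5 × 4 = 20 directed interactions, hence 2^20 configurations.

```python
# Count possible on/off interaction states among n system elements,
# assuming (illustratively) that each ordered pair of distinct elements
# has an independent binary interaction.

def interaction_states(n):
    directed_pairs = n * (n - 1)   # ordered (source, target) pairs
    return 2 ** directed_pairs     # each interaction independently on/off

for n in range(2, 6):
    print(n, interaction_states(n))
# Five elements give 2**20 = 1,048,576 states, already "more than a million".
```

The exact count depends on how you model an "interaction," but any reasonable model grows exponentially, which is the point of the anecdote.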

Distinguishing the significant interactions and the important external influences on a system is central to design and problem solving. And these days, engineers must include politics, public policy and economics as part of their systems. “The trick is to bound the scope of the system so it’s not too large to be analyzed and not too small to be representative.” Doing this right is “why systems engineers should be paid so much.”

Augustine concludes with his “Dirty Dozen” systems engineering traps, which have led to embarrassing bust-ups, monumental failures, and real tragedies. Notable among these: “the ubiquitous interface” (or absence thereof). He describes how two flight control groups used different units (one metric, one imperial) and accidentally sent a Mars-bound spacecraft whizzing off into deep space. There’s the “single-point failure,” exemplified by the collapse of a football-field-sized satellite dish due to a poorly designed bracket. There’s software, “which, like entropy, always increases”: a Mariner spacecraft headed in the wrong direction due to a missing hyphen in 100 thousand lines of code. The problem with most systems ultimately is that they “contain human elements … and humans sometimes do irrational things.”
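That kind of unit mix-up is easy to reproduce in miniature. The sketch below (illustrative numbers only, not mission data) shows what happens when one component emits thruster impulse in pound-force seconds and another silently consumes the same number as newton seconds:

```python
# One team's software reports impulse in pound-force seconds (lbf·s);
# the consumer assumes newton seconds (N·s). Same number, wrong unit.
LBF_S_TO_N_S = 4.448222  # newton seconds per pound-force second

reported = 100.0        # impulse the producer emitted, in lbf·s
assumed_n_s = reported  # consumer treats the bare number as N·s
actual_n_s = reported * LBF_S_TO_N_S

# Every manoeuvre is under-counted by a factor of ~4.45.
print(actual_n_s / assumed_n_s)  # 4.448222
```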

Change Your Mind: Memory and Disease

From MIT World Educational Videos

About the Lecture
How do we distinguish our friends from foes? How does dementia destroy memory? And how can past experience invade the present with destructive force? Scientists are closing in on the biochemical roots of these neurological puzzles.

Thomas Insel describes the profound impact of a small group of neuropeptides on social behavior in animals, from worms to humans. Oxytocin, the hormone which turns on maternal behavior and cognition, turns out to play a large role in determining social memories. Mice whose genes for producing oxytocin are knocked out can’t seem to remember animals they’ve met 30 minutes earlier – what Insel describes as “dense social amnesia.” An area of the brain’s amygdala is particularly rich in oxytocin receptors, and when the peptide is injected into a nearby ventricle, the animals’ social interactions revert more closely to normal behavior. Oxytocin is a useful tool for interrogating the circuitry that enables humans to determine “who’s important to me, who I’d die for, who I’m pair-bonded with, who will take care of me,” says Insel.

Alzheimer’s Disease (AD), which afflicts 20 million people worldwide, begins by literally clogging and tangling the hippocampus, the part of the brain essential for learning and memory. Li-Huei Tsai and other researchers have found “compelling evidence” that a small protein may be critically important in activating AD’s awful atrophy of memory. By manipulating specific enzymes, Tsai has managed to model in animals “all the pathological hallmarks of Alzheimer’s Disease,” and zero in on the source of the plaques and tangles seen in human Alzheimer’s patients. Tsai foresees drug interventions that inhibit these enzymes. But, she says, a big task remains “even after we’re successful in halting a deleterious process--how can we restore learning and retrieve lost memory in AD patients?”

Why is it that only some people exposed to a shocking event develop post-traumatic stress disorder (PTSD)? Kerry Ressler’s research posits that some kind of learning must take place in the brain’s amygdala, its fear response center, that cannot readily be extinguished. Researchers have tracked down a molecular factor that increases “after learning of fear or extinction of fear.” He believes that if this molecule is somehow blocked from doing its job, then someone suffering from PTSD cannot extinguish fear. In a fortuitous medical convergence, the drug D-cycloserine, which has been approved for years to treat tuberculosis, proves very effective in enhancing the effects of the molecule, and reducing fear of all kinds. One example: When people with fear of heights were given D-cycloserine as they took rides in elevators, they reported a significant, long-lasting reduction in their phobias.


How Does Your Memory Work

A BBC Horizon Documentary

Watch Video - How Does Your Memory Work

Aired: March 25, 2008 on BBC2. You might think that your memory is there to help you remember facts, such as birthdays or shopping lists. If so, you would be very wrong. The ability to travel back in time in your mind is, perhaps, your most remarkable ability, and develops over your lifespan. Horizon takes viewers on an extraordinary journey into the human memory. From the woman who is having her most traumatic memories wiped by a pill, to the man with no memory, this film reveals how these remarkable human stories are transforming our understanding of this unique human ability. The findings reveal the startling truth that everyone is little more than their own memory.

Monday, September 7, 2009

Colonising Space

The Universe - Colonising Space: a History Channel documentary film about the imminent colonisation of space, and the possibilities and difficulties that lie before us in our endeavour to spread out beyond our planet's biosphere and into the solar system, and perhaps even beyond.

Carl Sagan always maintained that we should become a "two-planet species": that we should colonise another planet as well as keeping the Earth, because, as he put it, the Earth sits in the middle of a shooting gallery and it is too risky to keep our whole species on one planet.
The most probable first destination for such a colony is Mars...

Watch Documentary - The Universe - Colonising Space

Saturday, September 5, 2009

Science and Islam

BBC Science Documentary with Jim Al Khalili
Physicist Jim Al-Khalili travels through Syria, Iran, Tunisia and Spain to tell the story of the great leap in scientific knowledge that took place in the Islamic world between the 8th and 14th centuries.

Its legacy is tangible, with terms like algebra, algorithm and alkali all being Arabic in origin and at the very heart of modern science – there would be no modern mathematics or physics without algebra, no computers without algorithms and no chemistry without alkalis.
For Baghdad-born Al-Khalili this is also a personal journey and on his travels he uncovers a diverse and outward-looking culture, fascinated by learning and obsessed with science.

From the great mathematician Al-Khwarizmi, who did much to establish the mathematical tradition we now know as algebra, to Ibn Sina, a pioneer of early medicine whose Canon of Medicine was still in use as recently as the 19th century, he pieces together a remarkable story of the often-overlooked achievements of the early medieval Islamic scientists.
Watch Documentary - Science and Islam

The Illusion of Reality

Quantum Physics explained
In the last in his documentary series, Professor Jim Al-Khalili explores how studying the atom forced us to rethink the nature of reality itself. He discovers that there might be parallel universes in which different versions of us exist, finds out that empty space isn't empty at all, and investigates the gap between the world as we perceive it and reality as it actually is.
About the Maker of this Documentary
Jim Al-Khalili OBE (born 20 September 1962) is a British theoretical nuclear physicist, academic, author and broadcaster. Born in Baghdad in 1962 to an Iraqi father and English mother, Professor Al-Khalili studied physics at the University of Surrey. He graduated with a B.Sc. in 1986 and stayed on to pursue a Ph.D. in nuclear reaction theory, which he obtained in 1989. In that year he was awarded a Science and Engineering Research Council (SERC) postdoctoral fellowship at University College London.

He returned to Surrey in 1991, first as a research assistant then lecturer. In 1994, Al-Khalili was awarded an Engineering and Physical Sciences Research Council (EPSRC) Advanced Research Fellowship for five years, during which time he established himself as a leading expert on the structure of neutron halo nuclei (atomic nuclei exhibiting the unusual feature of having one or two loosely bound neutrons orbiting the rest of the nucleus). He has published widely in his field. He currently holds an EPSRC Senior Media Fellowship. As a broadcaster, Jim Al-Khalili appears regularly on television and radio and writes regular articles for the British press. On television he is presenter of the BBC4 three part series Science and Islam about the leap in scientific knowledge that took place in the Islamic world between the 8th and 14th centuries.

Top Documentaries - Programmes and Films

Our earth is an extraordinary planet, not just because of the almost infinite variety of life that is supported on it, but also because its very fabric, the surface of the earth itself, is so varied. In this program, a global journey in search of the greatest natural wonders of the world is undertaken.

Top Documentaries - Programmes and Films - Great Natural Wonders of the World

This is such a wonderful video documentary - it will fascinate you and scintillate your senses with the visual feast of imagery. In any case, there are no documentaries from Sir David Attenborough which are not top quality, enriched by his obvious love of Natural History and his enthusiasm for the subject - you can see it in his face and hear it in his voice, which is one of the most individual and soothing narrator voices in television.

The Six Billion Dollar Experiment

BBC Horizon Documentary - The Six Billion Dollar Experiment

If the video doesn't work then you can watch it on the following links;
The Large Hadron Collider (LHC) is the world's largest and highest-energy particle accelerator, intended to collide opposing particle beams, of either protons at an energy of 7 TeV per particle, or lead nuclei at an energy of 574 TeV per nucleus. The Large Hadron Collider was built by the European Organization for Nuclear Research (CERN) with the intention of testing various predictions of high-energy physics, including the existence of the hypothesized Higgs boson and of the large family of new particles predicted by supersymmetry. It lies in a tunnel 27 kilometres (17 mi) in circumference, as much as 175 metres (570 ft) beneath the Franco-Swiss border near Geneva, Switzerland. It is funded by and built in collaboration with over 10,000 scientists and engineers from over 100 countries as well as hundreds of universities and laboratories.
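A quick sanity check on the beam figures above: a lead-208 nucleus (the isotope used; the nucleon count of 208 is standard knowledge, not stated in the text) carries 574 TeV, which works out to roughly 2.76 TeV per nucleon.

```python
# Energy per nucleon for the LHC's lead-ion beams (lead-208 assumed).
energy_per_nucleus_tev = 574.0
nucleons_in_pb208 = 208

energy_per_nucleon_tev = energy_per_nucleus_tev / nucleons_in_pb208
print(round(energy_per_nucleon_tev, 2))  # 2.76
```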

On 10 September 2008, the proton beams were successfully circulated in the main ring of the LHC for the first time. On 19 September 2008, operations were halted due to a serious fault between two superconducting bending magnets. Due to the time required to repair the resulting damage and to add additional safety features, the LHC is scheduled to be operational again in mid-November 2009.

It is anticipated that the collider will either demonstrate or rule out the existence of the elusive Higgs boson, the last unobserved particle among those predicted by the Standard Model. Experimentally verifying the existence of the Higgs boson would shed light on the mechanism of electroweak symmetry breaking, through which the particles of the Standard Model are thought to acquire their mass. In addition to the Higgs boson, new particles predicted by possible extensions of the Standard Model might be produced at the LHC. More generally, physicists hope that the LHC will help answer key questions such as:

* Is the Higgs mechanism for generating elementary particle masses in the Standard Model indeed realised in nature? If so, how many Higgs bosons are there, and what are their masses?
* Are electromagnetism, the strong nuclear force and the weak nuclear force just different manifestations of a single unified force, as predicted by various Grand Unification Theories?
* Why is gravity so many orders of magnitude weaker than the other three fundamental forces? See also Hierarchy problem.
* Is Supersymmetry realised in nature, implying that the known Standard Model particles have supersymmetric partners?
* Are there additional sources of quark flavour violation beyond those already predicted within the Standard Model?
* Why are there apparent violations of the symmetry between matter and antimatter? See also CP-violation.
* What is the nature of dark matter and dark energy?
* Are there extra dimensions, as predicted by various models inspired by string theory, and can we detect them?

Of the discoveries the LHC might make, the discovery of the Higgs particle and of supersymmetric partners has been keenly awaited by physicists for over 30 years, although neither can be considered a certainty. Of the Higgs, Stephen Hawking said in a BBC interview that "I think it will be much more exciting if we don't find the Higgs. That will show something is wrong, and we need to think again. I have a bet of one hundred dollars that we won't find the Higgs." Of supersymmetry it has been said "If the LHC does find supersymmetry, this would be one of the greatest achievements in the history of theoretical physics", which Hawking says "would be a key confirmation of string theory", adding that "Whatever the LHC finds, or fails to find, the results will tell us a lot about the structure of the universe."

The expectation that the Higgs boson will be discovered at the LHC is reinforced by the impressive agreement between the precise measurements of particle processes at the LEP and the Tevatron and the predictions of the Standard Model (formulated under the assumption that the Higgs boson exists). Moreover, there are strong theoretical reasons leading physicists to expect that the LHC will discover new phenomena beyond those predicted by the Standard Model. Referring to the so-called hierarchy problem, namely the fact that the Higgs boson mass is subject to quantum corrections which - barring extremely precise cancellations - would make it so large as to undermine the internal consistency of the Standard Model, Chris Quigg writes: "Physicists have learned to be suspicious of immensely precise cancellations that are not mandated by deeper principles. Accordingly, in common with many of my colleagues, I think it highly likely that both the Higgs boson and other new phenomena will be found with the LHC." He then goes on to present supersymmetry as a leading candidate for physics beyond the Standard Model, together with composite-Higgs models and large extra dimensions.

Ion collider

The LHC physics program is mainly based on proton–proton collisions. However, shorter running periods, typically one month per year, with heavy-ion collisions are included in the program. While lighter ions are considered as well, the baseline scheme deals with lead ions (see A Large Ion Collider Experiment). This will extend the experimental program currently in progress at the Relativistic Heavy Ion Collider (RHIC). The aim of the heavy-ion program is to provide a window on a state of matter known as quark–gluon plasma, which characterized the early stage of the life of the Universe.
Wikipedia Link

War on Science

BBC Horizon Documentary - A War on Science (Evolution vs Intelligent Design)
One of science's greatest theories, evolution, is under attack. The threat is emerging from the only scientific superpower on earth, provoking some of the biggest names in science to hit back...
The controversy surrounds a new explanation for the diversity found on planet Earth; for many, it threatens to replace science with God.
This is the story of a battle between faith and knowledge, and a defining moment in the scientific landscape of the 21st century.

Watch Documentary - The War on Science

Related Links;
BBC Science and Nature - Horizon

Super String Theory

BBC Horizon's Super-String Theory - M-Theory Origin of the Universe

An insightful scientific documentary about the Grand Unification Theory for the universe and its implications for our understanding of the universe's nature - String theory? Membrane theory!
In theoretical physics, M-theory is an extension of string theory in which 11 dimensions are identified. Because its dimensionality exceeds that of the five superstring theories in 10 dimensions, it is believed that the 11-dimensional theory unifies all the string theories (and supersedes them). Though a full description of the theory is not yet known, the low-energy dynamics are known to be supergravity interacting with 2- and 5-dimensional membranes.

This theory is the unique supersymmetric theory in eleven dimensions, with its low-energy matter content and interactions fully determined, and it can be obtained as the strong-coupling limit of type IIA string theory, because a new dimension of space emerges as the coupling constant increases.

Drawing on the work of a number of string theorists (including Ashoke Sen, Chris Hull, Paul Townsend, Ben Freeling, Michael Duff and John Schwarz), Edward Witten of the Institute for Advanced Study suggested its existence at a conference at USC in 1995, and used M-theory to explain a number of previously observed dualities, sparking a flurry of new research in string theory called the second superstring revolution.

In the early 1990s, it was shown that the various superstring theories were related by dualities, which allow physicists to relate the description of an object in one superstring theory to the description of a different object in another superstring theory. These relationships imply that each of the superstring theories is a different aspect of a single underlying theory, proposed by Witten, and named "M-theory".

Originally the letter M in M-theory was taken from membrane, a construct designed to generalize the strings of string theory. However, as Witten was more skeptical about membranes than his colleagues, he opted for "M-theory" rather than "Membrane theory". Witten has since stated that the interpretation of the M can be a matter of taste for the user of the word "M-theory".

M-theory is not yet complete; however it can be applied in many situations (usually by exploiting string theoretic dualities). The theory of electromagnetism was also in such a state in the mid-19th century; there were separate theories for electricity and magnetism and, although they were known to be related, the exact relationship was not clear until James Clerk Maxwell published his equations, in his 1864 paper A Dynamical Theory of the Electromagnetic Field. Witten has suggested that a general formulation of M-theory will probably require the development of new mathematical language. However, some scientists have questioned the tangible successes of M-theory given its current incompleteness, and limited predictive power, even after so many years of intense research.

In late 2007, Bagger, Lambert and Gustavsson set off renewed interest in M-theory with the discovery of a candidate Lagrangian description of coincident M2-branes, based on a non-associative generalization of a Lie algebra (a Nambu 3-algebra or Filippov 3-algebra). Practitioners hope the Bagger-Lambert-Gustavsson action (BLG action) will provide the long-sought microscopic description of M-theory.

Watch Documentary - Super-String Theory - M-Theory Origin of the Universe

Uncertain Principles

"Uncertain Principles" - this documentary features the miscellaneous ramblings of a physicist at a small liberal arts college. Physics, politics, pop culture, and occasional conversations with his dog.

Werner Heisenberg formulated the uncertainty principle in Niels Bohr's institute at Copenhagen, while working on the mathematical foundations of quantum mechanics.

In 1925, following pioneering work with Hendrik Kramers, Heisenberg developed matrix mechanics, which replaced the ad-hoc old quantum theory with modern quantum mechanics. The central assumption was that the classical motion was not precise at the quantum level, and electrons in an atom did not travel on sharply defined orbits. Rather, the motion was smeared out in a strange way: the time Fourier transform of the motion involved only those frequencies that could be seen in quantum jumps.

Heisenberg's paper did not admit any unobservable quantities like the exact position of the electron in an orbit at any time; he only allowed the theorist to talk about the Fourier components of the motion. Since the Fourier components were not defined at the classical frequencies, they could not be used to construct an exact trajectory, so that the formalism could not answer certain overly precise questions about where the electron was or how fast it was going.

Watch Documentary - Uncertain Principles

About the Uncertainty Principle
In quantum mechanics, the Heisenberg uncertainty principle states that certain pairs of physical properties, like position and momentum, cannot both be known to arbitrary precision: the more precisely one property is known, the less precisely the other can be known. It is impossible to measure simultaneously both the position and the velocity of a microscopic particle to arbitrary accuracy. This is not only a statement about the limitations of a researcher's ability to measure particular quantities of a system; once the wave nature of matter is accepted, the general properties of waves make the uncertainty principle a statement about the nature of the system itself.

In quantum mechanics, a particle is described by a wave. The position is where the wave is concentrated and the momentum is determined by the wavelength. The position is uncertain to the degree that the wave is spread out, and the momentum is uncertain to the degree that the wavelength is ill-defined.

The only kind of wave with a definite position is concentrated at one point, and such a wave has an indefinite wavelength. Conversely, the only kind of wave with a definite wavelength is an infinite regular periodic oscillation over all space, which has no definite position. So in quantum mechanics, there are no states that describe a particle with both a definite position and a definite momentum. The more precise the position, the less precise the momentum.
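This position/wavelength trade-off can be checked numerically. The sketch below (my own illustration, with ħ set to 1) builds a Gaussian wavepacket, obtains its momentum-space form with a Fourier transform, and verifies that the product of the two spreads sits at the minimum ħ/2 allowed by the uncertainty principle, which a Gaussian is known to saturate:

```python
import numpy as np

hbar = 1.0
N, L = 4096, 200.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

sigma = 2.0  # position-space width of the packet
psi = np.exp(-x**2 / (4 * sigma**2))  # Gaussian wavepacket

# Position spread from the normalized probability density |psi|^2
prob_x = np.abs(psi) ** 2
prob_x /= prob_x.sum()
sigma_x = np.sqrt(np.sum(prob_x * x**2) - np.sum(prob_x * x) ** 2)

# Momentum-space wavefunction via FFT; momentum p = hbar * k
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
phi = np.fft.fft(psi)
prob_p = np.abs(phi) ** 2
prob_p /= prob_p.sum()
sigma_p = hbar * np.sqrt(np.sum(prob_p * k**2) - np.sum(prob_p * k) ** 2)

# A Gaussian saturates the bound: sigma_x * sigma_p = hbar / 2
print(round(sigma_x * sigma_p, 4))  # 0.5
```

Making the packet narrower in x (smaller `sigma`) widens `prob_p` by exactly the reciprocal amount, so the product stays pinned at ħ/2.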

The uncertainty principle can be restated in terms of measurements, which involves collapse of the wavefunction. When the position is measured, the wavefunction collapses to a narrow bump near the measured value, and the momentum wavefunction becomes spread out. The particle's momentum is left uncertain by an amount inversely proportional to the accuracy of the position measurement. The amount of left-over uncertainty can never be reduced below the limit set by the uncertainty principle, no matter what the measurement process.

This means that the uncertainty principle is related to the observer effect, with which it is often conflated. The uncertainty principle sets a lower limit to how small the momentum disturbance in an accurate position experiment can be, and vice versa for momentum experiments.

Wikipedia Link
HyperPhysics - The Uncertainty Principle