LA Times columnist favors uploading

from the chips,-ahoy! dept.
In a commentary in the Los Angeles Times spurred by the release of the film A.I., Bart Kosko, a professor of electrical engineering at USC and author of Heaven in a Chip (Random House, 2000), places himself in the intellectual camp that sees a merger of humans and their technology as inevitable.

"It will be far easier to make us more like computers than to make computers more like us," says Kosko. He concludes: "So forget 'A.I.'s' vision of lumbering machines that simply mimic our pre-computer notions of speech and movement and emotions. Brains and robots and even biology are not destiny. Chips are."

VR systems help envision large data sets

from the visionary dept.
A team of researchers at the Center for Image Processing and Integrated Computing (CIPIC) at the University of California, Davis is applying virtual reality to help scientists see and handle large, complex sets of data. According to the press release on their work, the researchers say the simplest way to handle this data is to make it visible, so that scientists can "see" what is happening in an experiment. Virtual reality allows researchers to interact with the data while they are looking at it, making changes and seeing what happens.

The center is also offering a graduate-level class in which students learn how to build and work with virtual reality environments.

Mindpixel project will apply psych test to AI model

from the real-world-AI dept.
On a more practical note, the Mindpixel Digital Mind Modeling Project has announced that a standard psychological test used by clinicians worldwide in the evaluation and treatment of adults will be administered to a machine-based artificial personality.
The Mindpixel Project is a large worldwide AI effort, with nearly 40,000 contributing members in more than 200 countries. The project's goal is to build a highly accurate statistical model of an average human mind, which the project hopes can be used as a foundation for true artificial consciousness. The test will be applied to GAC (Generic Artificial Consciousness — pronounced "Jack"), an artificial personality being developed by Mindpixel. GAC will be evaluated over the next several months to assess its learning of human consensus experience from the Mindpixel project's large and diverse group of users from many different cultures.
The test will be supervised and interpreted by Dr. Robert Epstein, an expert on human and machine behavior. "Nothing like this has ever been attempted," said Epstein. "We're evaluating thousands of people worldwide as if they were one collective individual . . . We don't know if it is possible to build a normal personality out of millions of little pieces. This experiment will tell us how reasonable the idea is."

Analysis of Spielberg's movie, A.I.

from the gradual-future-shock? dept.
redbird (Gordon Worley) writes "Most of this is filled with spoilers, so I recommend that, unless you've seen the film, you don't click 'read more.' For those of you looking for a basic review, this is an okay movie (I'd give it about 2.5 out of 5 stars), but certain aspects of the film really ruin it. Basically, I consider this a cute movie about subhuman AIs, one that is not dangerous to the public's perception of AIs (in fact, it may actually help it by gradually future shocking them)."

Read more for redbird's review . . .

Researcher describes method to allow AI systems to argue

from the Open-the-pod-bay-doors,-HAL dept.
Ronald P. Loui, Ph.D., an associate professor of computer science at Washington University in St. Louis, has described a method for incorporating the ability to argue into artificial intelligence programs. His work is initially focused on legal arguments.

Loui's article, "Logical Models of Argument," consolidates research results from the mid-80s to the present. It appears in the current ACM Computing Surveys.
According to a press release on Loui's work, A.I. argument systems permit a new kind of reasoning to be embedded in complex programs. He says the reasoning is much more natural, more human, more social, even more fair. His proposal for A.I. argumentation is based on defeasible reasoning — which recognizes that a rule supporting a conclusion can be defeated. The conclusion is what A.I. specialists call an argument instead of a proof. Defeasible reasoning draws upon patterns of reasoning outside of mathematical logic, such as ones found in law, political science, rhetoric and ethics. Defeasible reasoning is based on rules that don't always hold if there are good reasons for an exception. It also permits rules to be more or less relevant to a situation. In this sense it is like analogy: One analogy might be good, but a different one might be better.
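The idea of rules that can be defeated by exceptions can be sketched in a few lines of Python. This is a minimal illustration of the general technique, not Loui's own formalism; the rules, the "specificity" priority scheme, and all names here are assumptions for the example (the classic "penguins don't fly" case).

```python
# Minimal sketch of defeasible reasoning (illustrative only, not Loui's
# formalism): each rule supports a conclusion but can be defeated by a
# more specific rule supporting the opposite conclusion.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    applies: Callable     # predicate over a set of known facts
    conclusion: str       # e.g. "flies" or "not flies"
    specificity: int      # higher = more specific; defeats lower

def conclude(facts, rules, claim):
    """Return the winning conclusion about `claim`, or None if no rule applies."""
    applicable = [r for r in rules
                  if r.applies(facts)
                  and r.conclusion in (claim, "not " + claim)]
    if not applicable:
        return None
    # The most specific applicable rule defeats the others.
    winner = max(applicable, key=lambda r: r.specificity)
    return winner.conclusion

rules = [
    Rule("birds fly", lambda f: "bird" in f, "flies", specificity=1),
    Rule("penguins don't fly", lambda f: "penguin" in f, "not flies", specificity=2),
]

print(conclude({"bird"}, rules, "flies"))             # -> flies
print(conclude({"bird", "penguin"}, rules, "flies"))  # -> not flies
```

Unlike a mathematical proof, the first conclusion here is only an argument: adding the fact "penguin" defeats the general rule and reverses the outcome, which is exactly the behavior classical logic forbids.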

AI in the news

from the pop-culture dept.
A brief article from Reuters on Yahoo! News ("Man Versus Machine Plays Out in Cyberspace", by Eric Auchard, 15 June 2001) highlights recent popular conceptions of artificial intelligence. The article comments on concerns about replicating robots raised by Bill Joy, on the transhumanist multimedia artwork of Natasha Vita-More, and on Ray Kurzweil's ideas about human-level AI.

New light-based computer runs at quantum speeds

from the quantum-computing dept.
A research team at the University of Rochester in New York state has created an optical information processing device that provides some of the advantages of quantum computing. The device mimics quantum interference, an important property that makes quantum computers exponentially faster at tasks such as breaking encryption codes or searching huge databases. Conventional computers, by contrast, use electrons to perform tasks sequentially. Quantum interference allows massive parallelism, vastly increasing the speed of the process. The new device demonstrates that light interference is just as effective as quantum interference in retrieving items from a database. The optical device does not, however, employ quantum entanglement, a property which may allow unique computing capabilities, but which so far has not been harnessed on a large scale.
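How interference speeds up database search can be illustrated with a small classical simulation of Grover's quantum search algorithm, where repeated interference steps amplify the amplitude of the marked item. This sketch simulates the amplitudes directly and is purely illustrative; it says nothing about the Rochester device's actual design.

```python
# Classical simulation of Grover's search: interference amplifies the
# amplitude of one marked item among N, so ~sqrt(N) steps suffice
# instead of the ~N sequential checks a conventional computer needs.

import math

def grover_search(n_items, marked, iterations):
    """Return measurement probabilities after the given Grover iterations."""
    amp = [1 / math.sqrt(n_items)] * n_items   # start in a uniform superposition
    for _ in range(iterations):
        amp[marked] = -amp[marked]             # oracle: flip the marked item's sign
        mean = sum(amp) / n_items
        amp = [2 * mean - a for a in amp]      # interference: inversion about the mean
    return [a * a for a in amp]                # probability of measuring each item

N = 16
steps = round(math.pi / 4 * math.sqrt(N))      # ~sqrt(N) iterations, here 3
probs = grover_search(N, marked=3, iterations=steps)
print(f"P(marked item) = {probs[3]:.3f}")      # ~0.96 after only 3 steps
```

The sign flip and "inversion about the mean" are the interference steps: amplitudes pointing the wrong way partially cancel while the marked item's amplitude grows, which is the parallelism the article describes.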

Impending doom, or maybe not?

from the thoughts-on-AI dept.
An Anonymous Coward writes "Recently I have been reading a bit about Kurzweil and Bill Joy's rants about the impending destruction of life-as-we-know-it.

"I'd like to attempt to discount the likelihood of human destruction via machine intelligence by trying to figure out what would/could happen."

Read more for the rest . . .

IBM initiative aims at greater computer system autonomy

from the am-I-blue? dept.
Sharad Bailur calls attention to a number of news reports of plans announced by IBM to design computers that would adjust to changing workloads, recognize faults and repair themselves without human intervention. A longer-term goal includes a sort of digital immune system to fight off computer viruses and other attacks. Although these goals are not radically new, some reports do mention Ray Kurzweil and ask whether such systems would have a sort of limited self-awareness.
A report appeared in the New York Times ("I.B.M. Project Seeks to Reduce Need for Human Action", by B.J. Feder, 27 April 2001). According to the article, I.B.M.'s research arm had already singled out such autonomous computing technology as a major focus for its work.

"Friendly AI" now open for commentary

from the smart-allies-not-enemies dept.
From Senior Associate Eliezer Yudkowsky: The Singularity Institute has just announced that it has begun circulating a preliminary version of "Friendly AI" for open commentary by the academic and futurist communities. "Friendly AI" is the first specific proposal for a set of design features and cognitive architectures to produce a benevolent ("Friendly") Artificial Intelligence. The official launch is tentatively scheduled for mid-June, but we hope to discuss the current paper with you at the upcoming Foresight Gathering this weekend. Read more for more details.
