Why would I not want a flying car?

Previous in series: Why would I want a flying car? There have been many reasons urged against the concept of flying cars; let’s take stock of them here: they are impractical (and thus time spent on the concept is wasted); they would be noisy or unsightly; they would be dangerous, to the occupants or to… Continue reading Why would I not want a flying car?

Why would I want a flying car?

Previous in series: Where is my flying car? Let’s consider:  I live in Laporte, PA, and have an office in the Foresight suite in Menlo Park, CA. That’s a distance of about 2800 miles, and I could drive it in about 40 hours, a full working week.  That’s a substantial commute. Of course, I don’t… Continue reading Why would I want a flying car?

Back again

Nanodot appears to be back on the air again. Our outage was an aftereffect of the hack attack we had a few weeks ago. This, and the other lingering effect (de-listing of the main site from Google), are not actual results of the hacking itself (which inserted code that popped up ad windows) but… Continue reading Back again

Haptics

There’s a nice article over at the Singularity Hub that’s a round-up of currently available haptics devices. They seem primarily excited about the prospects of haptics in gaming, but there are two reasons we’re interested in these developments. First is simply telerobotics, as in Feynman Path manipulation. We want the feedback to help develop an intuitive feel… Continue reading Haptics

Machine Ethics / Moral Machines postscript

While we’re on the subject of machine morality, here’s a talk I gave a couple of years ago on the subject.  You can see Wendell Wallach, one of the authors of Moral Machines, ask a question at about minute 27. Ethics for Machines

First machine ethics book

Over at Accelerating Future, Michael Anissimov has a pointer to a review of Moral Machines by Wallach and Allen. He makes one major factual mistake, though: MM is not the “first published book exclusively focused on Friendly AI” as he calls it. The first book dealing exclusively with these issues was my Beyond AI, which… Continue reading First machine ethics book

Organic vs machine evolution

A short comment on Drexler’s paper Biological and Nanomechanical Systems: Contrasts in Evolutionary Capacity:  He distinguishes two types of design, O-style (like organic) and M-style (like mechanical) systems.  He points out that O-style systems are much more robust to incremental design modification, where M-style systems require coordinated changes that are much, much less likely to happen… Continue reading Organic vs machine evolution

Site hacked — apologies

Spammers hacked into Foresight.org recently and inserted junk into some of our pages; we think we’ve gotten rid of it, but let us know if you see any more!

Self-replicating machines and risk

Engineering and analysis in the field of SRMs is unusual in many ways.  Eric Drexler has posted a paper about differences in evolutionary capacity in mechanical and biological systems that’s worth a look. Purely coincidentally, we at Foresight have been discussing self-replication in the context of the Feynman Path and I came up with an… Continue reading Self-replicating machines and risk

Learning and AGI

Yesterday I wrote that we don’t have a clue how learning works. If that were as categorically true as I made it sound, the prospects of AGI would be pretty much sunk. AGI requires getting up to the universal level of a learning machine: one that can in theory learn anything any other learning machine… Continue reading Learning and AGI
