Re: [TML] automation and its ramifications Tim 25 Jun 2016 08:32 UTC

On Fri, Jun 24, 2016 at 05:24:00PM -0700, Jim Vassilakos wrote:
> So, my first guess is that when a society develops strong AI,
> clearing the final hurdle is not due to a breakthrough in hardware
> so much as software, which means that the whole process will be
> replicable on a mass scale and that various “finished” AIs could be
> copied rather easily.

At the moment, it looks like that could go either way.  There are
certainly reasons to believe that we could get much better AI out of
existing hardware than we have so far, but there are also reasons to
think that we may need better (and therefore very expensive) hardware
to approach or exceed human capabilities.

> I’m guessing that the first strong AIs will be able to inhabit robot
> bodies and will be raised to some extent as are children.  [...]
> My guess is that AIs that chronically misbehave will be aborted.

This is where things start to get very messy.  The range of ways
strong AI might develop is extremely broad.  If socially acceptable
behaviour cannot be designed in but must be learned as humans learn
it, that indicates a dangerous lack of understanding about how the AIs
actually work internally.

If strong AI is just a "software issue" so that hardware capable of
supporting such sapience is cheap, portable, and widely available by
the time it develops, that's probably the most dangerous and
unpredictable path of all.

It means that nobody really knows how the few initial strong AIs
think, and whichever one convinces humanity of its benignity first
probably gets to replicate a billionfold in short order.  What's more,
it also means that there is hardware very much faster than the
mass-produced hardware generally available on which they might run.

So virtually overnight, there are a billion copies of some AI in robot
bodies, of unknown internal thought processes and motivations, and
probably thousands of copies embodied as super-AIs.  The society thus
ends up with a huge monoculture of poorly understood beings, who can
easily copy themselves onto commodity hardware, and some of whom can
think and learn just as well as, and very much faster than, any human.

That's not an automatic recipe for disaster, but it's fertile ground
for one.

> They will create entirely new sciences and technologies and
> eventually even more powerful versions of themselves. One day,
> humanity and strong-AI may merge to an extent, if not completely.

That's probably the scenario with the most hope for humanity under
these assumptions, but given the starting points above, I think it's
less likely to arise than it would be otherwise.

[ Snipped tables that have been saved and look like a useful starting
point for a game ]

> I’ll stop here for the moment, but you get the general idea. What do you
> think so far?

Very interesting indeed.  It's probably a rockier and more dangerous
start to the development scenario than I would hope for, but the
assumption that strong AI can inhabit cheap hardware as soon as it is
developed does seem to be a common theme across a great deal of
science fiction, both optimistic and horribly grim.

- Tim