Oh my!

Your answer is telling me that AI is mostly smoke and mirrors at the moment. Or the hype of the day, meant to hopefully drive stock prices upwards. Might be the next big thing, someday, but it's not very good now.

I wouldn't trust it to design anything critical. Why? Because there's no one responsible! What if the AI design fails? Where's the restitution? It's not that AI is bad, but honestly, there's no legal framework for responsibility or liability should anything go awry. Neither government nor the law knows how it should be handled.

So when your AI-controlled car runs someone over, is it responsible, or are you? What if it took control from you? Or you were asleep! Who is responsible: the 20-year-old whiz-kid programmer with no real-world experience, the car company, or...?

Maybe we ought to think some of these things through before wholesale adoption. I'm not saying to stop it, but I am saying that most of the implications need to be reviewed.

This reflects my biggest concerns at the moment. AI is far from new. Way past infancy. We've hit a degree of its use that has put it in the public eye, but that wasn't a breakthrough; it's just another iteration of the self-learning algorithms. One famous, well-known example is the YouTube algorithm, which helps you (and each of us, individually, with whatever "preferences" it thinks we have) find our way through that site. That site has enough videos that if you scrolled your scroll wheel as fast as you could, too fast to even read, you'd never get to the end of the list, because the list of titles grows faster than you can move it up the screen. YouTube and Google have no control over that algorithm, except to install a new limit (like "don't show how to eat Tide Pods") or to shut it off completely.

It's hard to put AI into terms like that, because unlike an anthropomorphized life cycle, AI has no limits on life span or brain capacity. It is self-learning, self-growing; it writes its own code. If it runs out of "brain power", it just needs more processors plugged in, more memory plugged in. The lawmakers (and law enforcers) don't have a lot to work with here, because they do not know what it is. Neither do the people who turned it loose. It literally learns on its own. It literally writes its own code. After some time in the wild (and it's far from new right now), all the best, brightest, most accomplished computer scientists in the world don't know much more about their program than the 20-year-old whiz kid who read about it in a magazine. The only restraints on what AI can do are the hard limits set when it's turned loose on a project, and those can only be based on what can be foreseen at the time.

Right now (with exceptions, I'm sure), AI is being used for "finite" tasks, where its "mission" is to do a specific thing. What becomes scary is when someone turns one loose that does NOT have a specific goal, and/or its goal is not one that's desirable. Or it's not precluded from accessing its own "OFF" switch, which in the end is just more software in the program. Or it's not restrained from updating its own mission parameters. The stuff sci-fi movies are made from. We ARE there, and for now, we're relying on the programmers and end users of this software to keep there from being any "holes" in the safety part of the programs. A minimal toy sketch of that point follows.
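To make the "the OFF switch is just more software" point concrete, here's a minimal sketch, all names hypothetical and nothing from any real system, of a loop where the kill switch and the mission live outside what the agent code is allowed to modify:

```python
# Toy sketch only: a "hard limit" is just ordinary code, and the safety
# property depends entirely on the agent not being able to rewrite it.
# All names and numbers here are hypothetical.

MISSION = "stack the green boxes"  # fixed "mission parameter", set at launch

def kill_switch_engaged(step: int) -> bool:
    """Stand-in for an external signal the agent cannot write to
    (an operator console, a hardware pin). Here: stop after 3 steps."""
    return step >= 3

def agent_step(mission: str) -> None:
    print(f"working on: {mission}")

step = 0
while not kill_switch_engaged(step):  # guard checked outside the agent's code
    agent_step(MISSION)               # the agent only ever reads MISSION
    step += 1
```

The worry in the post above is exactly the case where the loop body is allowed to reassign MISSION or redefine kill_switch_engaged; nothing in the software itself prevents that unless someone foresaw it and walled it off.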

We are not (to my knowledge) having any messes like that right now, but it will come. The more this stuff is available, the more bad actors will use it. It's been five or six years now since I got my first AI phone call, a very poor one. We think we don't get those anymore, but what's actually happened is that the AI has "learned" to interact in a way that's indistinguishable from an actual person. So when you get a "spam" phone call, it might still be a person, and it might not be. You really can't tell anymore. That's why AI calls have "gone away": it's not that they're gone, it's that they're unrecognizable. Due to the countries of origin that are popular for these types of fraud, law enforcement is pretty much out of the loop, just as it is when it's a human calling to talk you out of your money. I see no reason why that sort of thing would stop, with technology growing to larger and larger potential.

Why would AI need gobs of more power? Is it not just computers and microchips (5 volt power supply in a 15 amp circuit) controlling the same stuff we have now?

Yes, it's just a program running on microchips. What you're missing is the scale at which it's used by the internet giants. This doesn't run on a desktop computer. Billions of instances of the same program running at the same time. Literally billions, with a "B". More efficient than that many individual laptops, but that power is real, and it does add up.
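To put rough numbers on "it adds up" (all figures below are my own illustrative assumptions, not measured data), a quick back-of-the-envelope run:

```python
# Back-of-the-envelope arithmetic with assumed, illustrative numbers.
watts_per_instance = 10      # assumed average draw attributable to one instance
instances = 2_000_000_000    # "billions, with a B" (assumed count)

total_gw = watts_per_instance * instances / 1e9
print(f"{total_gw:.0f} GW")  # -> 20 GW
# For scale: a large power plant is on the order of 1 GW, so this
# assumed load is roughly twenty such plants running flat out.
```

Even if the per-instance number is off by a factor of ten either way, multiplying by billions lands you at grid scale, which is the whole point.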

That is, if it does exist...

That's not even a question. It's an evolution that's been growing geometrically since the late 1950s, when IBM's Arthur Samuel gave it the name "machine learning". Because of its geometrical growth pattern (as is typical of computer software), its advancements left its predecessors in the dust, particularly its foresight, its predictive power, and its power to GENERATE its own code rather than "react" to the changing of some available variables. For that it's been "upgraded" to artificial intelligence, and it's become more well known. It's not new at all. Its capabilities, its implementations, and its public awareness are what's new.

.......works properly and accomplishes any of society’s needs. So far all I have seen are dancing robots, who don’t solve any societal needs for me.

Dancing robots are not AI. I mean, AI could be in there, but dancing robots that have learned to balance themselves and react to unexpected outside stimulus (someone trying to knock them over)? Those are reactions. That's the realm of machine learning. The transition from machine learning to AI is not a "hard line", although there are some distinctive differences, but the advancement from "self-learning" to "virtually thinking" is a change that's 30-plus years old. Of course the "visionary" computer folks from the earliest days had "intelligence" in mind, but it took quite some time to get to where it was even on the table to try....
What has changed with AI as of late, with it being more well known and publicly visible, is the power of these programs, the scale at which they're being used, and the availability of "off the shelf" programming to set up your own intelligent computer and send it off to do whatever you want it to do...
 
What's new is the huge ramp-up in scale. So how long before a self-serving AI consumes the whole grid? Cutting off power to consumers and industry it doesn't "like"? You know that after a while it will not be possible to simply pull the power cord. It will be able to control the grid to route power as it pleases, back to itself. It will control its own security and doors, preventing access.

I'm stating that as a crazy event, but as AI grows, it could very well develop some awareness and want to continue its existence at the expense of others. I don't see that there are any safeguards built in. We know it won't have a moral system, or it may only reflect its creator's. Or, worse yet, it may see us as its enemy.

All I know is man is quite capable of committing atrocities for whatever reason. I can't believe that AI wouldn't be the same or worse.

I'm not a Luddite, despite the above. I believe in the forward march of progress. I think technology can help us and make our lives better. I'm only asking for a little bit of evaluation of what it really means for us before we dive down the rabbit hole.

Face it: in our time, there are robots on the battlefield. That was science fiction 20 years ago. Let's not rush headfirst down the path to enslavement by our robot masters. If AI becomes sentient, what prevents it from controlling the assets that protect it, at our expense?

The commercial guys want total and unfettered freedom to do what they want. For this technology, I'm not so sure it's a good idea. Your thoughts may differ.
 
.......disaster have been worked out by now? I mean, it's been a while, yeah?
Yeah, sure, it was a people issue... people making mistakes... We keep thinking we can do it better, only to make the same error twice, or another error that they didn't foresee. Man thinks he's superior, only to find the humbling fact that it's all BS.
 
People are imperfect. Despite the experience of TMI, there have been Chernobyl and Fukushima. Sure, the events happened for different reasons, but still, it's clear we're not great at prevention, because that takes a kind of reasoning that's difficult to reconcile with other pressures. On the one hand, there haven't been many incidents; on the other, the ones that did happen seemed to get worse. So did we learn? I'm not sure we did.
 
I still drive a manual shift truck, heat with firewood. Don’t need that crap. Would consider an AI chainsaw that would load and stack too.
 
I'm with you; time moves on, things expand, and it's usually for the better. Something that bothers me a lot, though, is that I've heard a couple of "voluntary science spokespeople" say something along these lines: some of the biggest "software developer" folks out there working on this do it because it's the way forward. Just as many say they stay on the front lines because the only way to recognize, control, or combat AI is with an AI that was launched for that purpose. It kind of reminds me of what a lot of the US-based physicists (regardless of their origin) said about building the first nuclear weapons: the idea is out there, the technology is coming quickly, shortly it will be available to most nations, and the only way to control it when it is in the wrong hands is to have a better one in the right hands.

Those early nuclear physicists were spot on. Only time will tell; AI is totally different, but very weaponizable. Will it be for the greater good? Probably. That era of physics cracked the puzzles behind the very computer you're reading this on right now. Or your smartphone. Or anything with an LED light in it. All in all, that research went mostly toward good, except for it being built into unfathomable bombs the world had never seen. I wonder how the AI explosion will go as it crosses the tipping point of being good enough for the mainstream. They're clearly not the same thing, but I see lots of moral, ethical, and practical similarities between the two, not the least of which is simply an unbelievable amount of power (different kinds of power) in not many hands.
 
With my agricultural background AI means artificial insemination. Same process done to other animals - us.
 
A great concentration of power or knowledge in the hands of a few isn't necessarily good, a concentration of power especially.
 