Oh my!

AI has a lot of great and meaningful applications that just aren't exciting enough (to many) to get the publicity. We hear about "casual use" stuff, like being able to restore old photos, or enlarge and improve grainy pictures, but there's so much more.

ChatGPT (you may have heard of it) is a wealth of knowledge on tap. You could ask it how to bore a hole and it would give you an answer.
Here's an example. I asked:


ChatGPT answered:

Note that I limited the response to two paragraphs for the sake of posting here, but I could have asked for 10 pages of detail on precision hole boring and it would have produced that. (Or I could have left the length unspecified.)
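For anyone curious what that looks like outside the chat window, here's a minimal sketch using OpenAI's Python client. The model name is just a placeholder and the prompt is paraphrased from the question above; the two-paragraph limit is nothing special, just part of the prompt.

```python
# Minimal sketch of asking the same sort of question programmatically.
# Assumes the official "openai" Python package and an OPENAI_API_KEY set
# in the environment; the model name below is only an example.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat-capable model works
    messages=[
        {
            "role": "user",
            "content": "In no more than two paragraphs, explain how to bore "
                       "an accurate hole on a lathe using a boring bar.",
        }
    ],
)

print(response.choices[0].message.content)
```

Leave the length instruction out of the prompt and the model decides for itself, or ask for ten pages of detail and it will oblige.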

Beyond that, impressive as it is, AI is making great strides in medicine. An AI can take a huge amount of data on patients and find correlations that are easily missed by humans. Examples include finding previously unknown risk factors for certain illnesses, improved diagnosis, and determining causation - like finding that people in a particular area have a higher incidence of some affliction.

It's a shame that what we hear about is the "toy" stuff.

The same thing happened in the 80's, when AI was also a big buzzword. At that time scientists over-promised and under-delivered because they could not achieve the things they said they would, at least in a reasonable time frame. They underestimated the amount of work and processing power involved, and so began what computer scientists refer to as "the AI winter". Today, though, many of the promises of AI are coming to fruition, largely due to "deep learning", which is a more computationally intensive form of the "neural networks" developed in the 80's - computers at the time simply didn't have the power to work with as many nodes ("neurons") as they do today. (There are other AI techniques as well, but deep learning is probably at the root of most of the big breakthroughs.)
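To give a rough idea of what "more nodes" means in practice, here's a toy sketch (NumPy only, with layer sizes I made up purely for illustration; it's not any real architecture) of the arithmetic a single forward pass through such a network involves:

```python
# Toy forward pass through a small fully-connected network (NumPy only).
# The layer sizes are arbitrary illustrations, not a real architecture.
import numpy as np

rng = np.random.default_rng(0)

def forward(x, layer_sizes):
    """Push an input vector through randomly initialised dense layers."""
    for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
        W = rng.standard_normal((n_out, n_in)) * 0.1  # weights ("synapses")
        b = np.zeros(n_out)                           # biases
        x = np.maximum(0.0, W @ x + b)                # ReLU "neurons"
    return x

x = rng.standard_normal(16)

shallow = forward(x, [16, 8, 4])               # roughly 160 weights
deep = forward(x, [16, 256, 256, 256, 4])      # roughly 136,000 weights

print(shallow.shape, deep.shape)
```

The only difference between the two calls is the list of layer sizes, but the multiply-and-add count explodes with width and depth, which is the "computationally intensive" part that 80's hardware couldn't keep up with.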

I'm not saying there's no utility to AI, whatever that is. However, even your examples seem a little weak, and don't strike me as revolutionary, or even greatly enabling. Nearly all of the above was available with simple programming 5 or 10 years ago. I'll admit to advances in neural networks, but I don't see the differentiator between, say, neural networks, which are already old school, and AI. What does AI do better? So far it doesn't seem to be terribly energy efficient; is it supposed to be more efficient than current methods?

And yeah, there's that fool-me-once thing... So we heard this same song and dance in the 80's... It was a bit of a tulip craze. Then we heard it in the 90's with the dotcom thing. No need to continue with examples. I'm being a skeptic at the moment, but honestly, I'd like to know. Not the pie-in-the-sky stuff: what will it mean to me? Will the cost of drugs go down? Food? Or will companies become more efficient? Or will all our children be put out of their jobs? Yeah, it's a heavy thought, but maybe we ought to be thinking about what the "next greatest thing" will bring to society. If it does us all some good, sure. If it's just a scheme to divert taxpayers' money to the few, it sounds like a Faustian bargain...
 
Great reply. It will, of course, be met with indifference and skepticism by many members on here who are, shall we say, at an age where differentiating between new, potentially world-changing breakthrough technology and overhyped fads becomes difficult. :grin:

I don't have to tell you how old I am, because for as long as I can remember I couldn't figure that (crap) out.

Of course, it could all go horribly wrong, and there are plenty of terrible scenarios where AI is world-changing for all the wrong reasons, but there are enough non-AI-related ways our species could permanently fekk things up, so eh, what's new. :grin:

Someone WILL eventually use it for something destructive on a worldwide scale. In the meantime, its biggest use and biggest driver is the internet companies who turn it loose to process all the "non personal" and "aggregate" data that is "unidentifiable", and piece it together until there is a clear picture of each internet user, with or without a given name. Just a number serves the same purpose. That's the cash driver that's making it grow so quickly. And of course there ARE good uses for it besides wringing dollars out of internet users, but as of this point (which could change), processing our internet activity to "not identify" us but "figure out how to sell stuff to each individual internet user" is where the money's getting poured into the technology.

As for Three Mile Island, wouldn't the mistakes and problems that caused the original disaster have been worked out by now? I mean, it's been a while, yeah?

One reactor melted down, back in the 1970's. That was the big news. The reactor in question, the one they want to re-fire, was active for quite a while and was properly, legally, ethically, and correctly shut down five or six years ago because of "not enough demand". In my own words, there was too much power on the grid in that area, driving prices down and making the reactor uneconomical. And/or other reactors owned by the same folks became "more economical" by reducing the supply. Not sure where my conspiracy theory lands on that one. But the bottom line is that the reactor Microsoft wants to restart has nothing to do with the one the plant is famous for. Restarting it will be a non-event.
 
Long before the term AI there was another, more accurate term: “machine learning”. It was commonly used in industry to automate production. The company I worked for used it extensively.

When new products were developed the initial assembly and packaging was done manually. If the product was a success we immediately started automating the process. The company was willing to lose money on the introduction if they thought they could make a profit in the long run.

I remember one product introduced in the early 1990’s. When it was being produced and packaged manually they lost money on every piece sold. It took 31 people to assemble and package it. As time went on we were able to automate most of the process with machines that incorporated “machine learning”.

When the project was completed it only took 8 people to run the production line from raw materials to finished product out the door.

On one hand 22 people who initially built the product were no longer needed. On the other hand 8 people who were previously not needed on the production floor had permanent jobs.

The company currently has 21 lines producing this product, so in the long run 168 permanent jobs were created that didn’t exist prior to the product’s introduction.
 
Ok, a real example. Thank you. So is AI simply a way to hype up Wall Street to get more money? Dunno; machine learning I can get behind, but it doesn't seem like something that needs a whole power plant. If it does, it seems terribly inefficient.
 

Machine learning is the forerunner of AI, but it's not a valid comparison. AI is able to do in a millisecond what machine learning could do in a few minutes (mostly because of the processor speeds available), but if (and it's not perfect, but if) one said that machine learning had reactive facilities to correct for error, then AI has predictive power to anticipate errors in parts and processes it's never seen the like of before. It takes a LOT of computer power to do things that are quite simple and easy for a human brain. And you've got an "instance" of your AI program working on tens, thousands, millions, billions of "problems" at a time (again, owing more to modern processor power than to "miracles in software design"). AI can work in a standalone instance and be no different than opening a picture on a laptop.

The problem, the energy consumption, is that the "data centers" running this to facilitate the advertisement empires of Microsoft and Google are running, literally, billions of instances of the same AI software to sift through, sort out, and put together the information (yours and mine) that is valuable to them. And of course they've got other uses for it too: they're sorting out medical shenanigans, genes, proteins, and various new, modern treatments; they're modeling human behavior to make "non-intuitive" traffic intersections and highway interchanges that make life better, safer, and less stressful; and ten thousand other things that human brains can't "wrap their head around". There is a good use case for it. As for now, though, the big problem, the metropolis-sized power consumption, is not inherent in the AI; it's just the scale at which corporate greed is putting it to use.
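A crude way to picture that reactive-versus-predictive distinction is a sketch like the one below. It's my own toy illustration (the target dimension, the drift rate, and the straight-line "model" are all invented), not how any real machine does it: the reactive version corrects by the last observed error, while the predictive one fits a trend to recent parts and compensates for where the next part would land before it gets there.

```python
# Toy contrast between reactive and predictive error correction (NumPy only).
# All numbers and the linear drift model are invented for illustration.
import numpy as np

TARGET = 10.000          # nominal dimension, mm
DRIFT_PER_PART = 0.002   # hypothetical tool wear per part, mm

def reactive_offset(measurements):
    """React after the fact: correct by the last observed error."""
    return TARGET - measurements[-1]

def predictive_offset(measurements):
    """Anticipate: fit a line to recent parts and pre-compensate for
    where the NEXT part would land if nothing were done."""
    n = len(measurements)
    slope, intercept = np.polyfit(np.arange(n), measurements, 1)
    predicted_next = slope * n + intercept
    return TARGET - predicted_next

history = TARGET + DRIFT_PER_PART * np.arange(5)   # five parts of steady drift
print(f"reactive correction:   {reactive_offset(history):+.4f} mm")
print(f"predictive correction: {predictive_offset(history):+.4f} mm")
```

The reactive correction always lags one part behind the drift, while the predictive one lands the next part back on target - the flavor of advantage being described, just on a vastly bigger scale.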
 
Machine learning may be the forerunner of AI, but AI has not advanced that far in the real world. One of my brothers has been working on AI for Google for several years. While it can do many things the old machine learning was incapable of, the advances are primarily due to the speed and capacity of the processors.

Whether you like to believe it or not, AI is in its infancy at best. You can ask it the same exact question 15 times in a row and get 15 different answers. In many cases the answers are contradictory or biased toward the programmer's agenda. At this point I have NO FAITH that AI can give accurate and unbiased answers to the simplest questions.
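That variability is easy to demonstrate for yourself. Here's a rough sketch (assuming the official "openai" Python package and a placeholder model name) that sends the identical question repeatedly and counts the distinct replies; by default the model samples its output, so repeated runs generally differ, and lowering the temperature reduces, but doesn't fully eliminate, that variation.

```python
# Rough sketch: ask the identical question several times and count how many
# distinct replies come back. Assumes the "openai" package and an
# OPENAI_API_KEY in the environment; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()
QUESTION = "In one sentence, what causes chatter when boring a deep hole?"

answers = []
for _ in range(15):
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": QUESTION}],
        temperature=1.0,      # default-style sampling; try 0 to compare
    )
    answers.append(resp.choices[0].message.content.strip())

print(f"{len(set(answers))} distinct answers out of {len(answers)}")
```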
 
Just for the sake of discussion: so AI is there to complete the surveillance state? So insurance companies can collect information to charge you more? Like how they already collect vehicle telematics to learn about you, many times without your express consent? Or to further governments' ability to spy on our every breath? Is this what we want to enable? Or something more beneficial?

Technology is a two-edged sword; it can be both a liberator and an enslaver. So how do WE make sure which one it is?
 
Machine learning may be the forerunner of AI, but AI has not advanced that far in the real world. One of my brothers has been working on AI for Google for several years. While it can do many things the old machine learning was incapable of, the advances are primarily due to the speed and capacity of the processors.

Whether you like to believe it or not, AI is in its infancy at best. You can ask it the same exact question 15 times in a row and get 15 different answers. In many cases the answers are contradictory or biased toward the programmer's agenda. At this point I have NO FAITH that AI can give accurate and unbiased answers to the simplest questions.
Your answer is telling me that AI is mostly smoke and mirrors at the moment. Or the hype of the day, meant to drive stock prices upwards. It might be the next big thing someday, but it's not very good now.

I wouldn't trust it to design anything critical. Why? Because there's no one responsible! What if the AI design fails? Where's the restitution? It's not that AI is bad, but honestly, there's no legal framework for responsibility or liability should anything go awry. Neither government nor the law knows how it should be handled.

So when your AI-controlled car runs someone over, is it responsible or are you? What if it took control from you? Or you were asleep! Who is responsible: the 20-year-old whiz-kid programmer with no real-world experience, the car company, or...?

Maybe we ought to think some of these things through before wholesale adoption. I'm not saying to stop it, but I am saying that most of the implications need to be reviewed.
 