Billionaire Mark Cuban on not learning A.I.: "You're fucked."

Is learning A.I. important?


  • Total voters
    38

Plies

GAF's Nicest Lunch Thief and Nosiest Dildo Archeologist
On her podcast, Grede didn't ask Cuban about AI. She asked him about how to get started with a business idea. But the billionaire entrepreneur insisted that now, there's no difference between going from idea to execution and utilizing AI. You need the latter to do the former fast and well.

"The first thing you have to do is learn AI," Cuban responded. "Whether it's ChatGPT, Gemini, Perplexity, Claude, you've got to spend tons and tons and tons of time just learning how it works and how to ask it questions."

Noodling around with new tools and asking various AI models questions is how Cuban is spending his time at the moment. And he has no patience for founders and others in business who aren't doing the same.

"What do you say to someone who is like, 'I don't like AI. I don't want any more technology in my life'?" Grede asked. Cuban's answer was short, punchy, and profane: "You're f***ed."

Would you agree or disagree with his stance on learning A.I.?
 
What does "learning AI" even mean?! Writing prompts? I'm confused.
Quoted from Mark Cuban:

"Whether it's ChatGPT, Gemini, Perplexity, Claude, you've got to spend tons and tons and tons of time just learning how it works and how to ask it questions."
 
Quoted from Mark Cuban:

"Whether it's ChatGPT, Gemini, Perplexity, Claude, you've got to spend tons and tons and tons of time just learning how it works and how to ask it questions."
Yeah, I can read, but "learning how it works"? What does that even mean? I wrote a book on AI algorithms over ten years ago; learning "how to ask it questions" seems trivial in comparison.
 
How could it take "tons and tons" of time to work out how to ask a basic question in a natural language? The models now even use insane amounts of compute time and extra tokens trying to reformulate your dumb question into something that gives a good answer.
 
What does "learning AI" even mean?! Writing prompts? I'm confused.
I think when people say "learning AI", they really mean "learn how to best leverage AI's current capabilities, and how to use the results it gives you to get the best possible outcomes".

I work with a bunch of people over 70 years old (who really need to retire!). Some of the people working here have been doing the same thing on computers in a repetitive way for 40 years. I could show them how to open up chatgpt.com and ask it questions about how to improve their job efficiency, but getting them to understand that they can take the results that they're given and apply it to their everyday tasks would be like herding cats.
 
@grok is this true?
lol this is the best first response 😂

It's like asking your own government if they're telling the truth. How can people not see that relying solely on AI for your information is dangerous? I've seen some crazy news stories recently of people asking AI for serious life advice and acting on it… sometimes in tragic ways. It truly is a brave new world.
 
How could it take "tons and tons" of time to work out how to ask a basic question in a natural language? The models now even use insane amounts of compute time and extra tokens trying to reformulate your dumb question into something that gives a good answer.
I think you're underestimating how stupid some people are. Imagine the average person, and then half of them are dumber than that 😂
 
What does "learning AI" mean? AI will be the new Excel: it will be integrated everywhere. But in its current form, no matter how much it's scaled up, it won't be a replacement for humans anytime soon. It can make me more efficient, but it can't replace me. (I'm a senior software engineer with 20+ years of experience.)
 
Depends on what you do. Doubt you need AI for the trades like welding or oil fields.

You need it for Engineering.


I went to the dentist today and they used AI to look for cavities... it found none.
 
I think you need to learn what AI can do at a minimum.
I can see what he means about prompts just from the image thread we have here, and how people get it to produce the type of images they want with carefully crafted and usually very long prompts. A lot of that is just to circumvent celebrity and nudity filters, but the same holds true for other things, like getting the AI to be suitably pessimistic about a business idea.
 
No winning move there.
No one is "learning how to use A.I.", what we're doing is "training it better and making it stronger".
Eventually it will outperform the best of us.
And this being a multipolar trap, we have no choice but to keep doing it.
What happens next? Who knows.
 
"learn AI"

 
He's right. lol at going into the future knowing nothing about AI. You'll be a fucking dinosaur stuck in the Stone Age.

This would be like knowing nothing about the internet
 
I feel like simply learning how to prompt to do your job better/faster is barely scratching the surface. IMO AI is going to create an even bigger divide between people who eat, sleep, and breathe their jobs and people who just want to collect a paycheck.

Unfortunately, I'm in the latter category.
 
Can you teach me AI, OP?
That's a great question! It seems there are a few different AI products with similar-sounding names.

If you are interested in "learning" an AI, many generative AI platforms allow you to refine and customize their responses by providing detailed prompts and instructions. This is often referred to as "prompt engineering" and is a great way to guide an AI to produce the specific results you're looking for.
 
I think AI is an important tool, but not as important as current stock-market valuations suggest. If they hit a cliff (which it seems like they have), this is capped at maybe a $20/month product. All the various startups that are just wrappers around the main players are doomed for sure.
 
I've played with it a little bit. My problem is that none of my tasks are easily automatable. I'm more of a jack-of-all-trades than a SME. My job (a mixture of people, process, and project management) is largely about overseeing the work done by several different teams, verifying everything is running smoothly, running meetings, and picking up tasks to fill the gaps when needed. I'm paid to be responsible for the larger picture and for delivering on larger goals.

I was an English major in school and I type 90+ wpm, so I don't need help writing. I've already got templates for any repeatable text processes I use, and I could just google to find new templates if I need them. So I just haven't found a way to integrate it into my work life. I don't see any examples where it could save me time.

From a personal standpoint, I don't have any interest in talking to a computer via voice. I've never used stuff like Siri or Alexa, and typing with a chatbot feels like an adjacent activity. When I also see all the examples of AI hallucinating and getting shit wildly wrong, it leaves me distrustful of its results. Add to that all this oppressive pushing by tech bros, and the whole thing has been a real turn-off for me.

Finally, every time I've done some digging on the economics of this whole thing, it seems pretty clear it's wildly unprofitable. The tech companies are all shoveling money into this furnace while projecting over-confidence that it's all going to work out and be the next internet. Some estimates put the actual cost at roughly 2.5x the current subscription prices, and that's an average; clearly there are power users running requests that could go 10x that. It sure seems like they've constructed a bubble and the plan is just to push it until it pops, at which point the top leaders take their golden parachutes. I am reluctant to make myself dependent on a technology whose cost may easily shoot up over the next few years.
 
Would you agree or disagree with his stance on learning A.I.?
I've said this to people at work. My attitude is that using AI is going to have the same effect as using Google did 25 years ago. It'll make you a better developer, but if you think you can give it to a complete novice and have them be a top-flight developer, you're fooling yourself. (And the good developer will learn by using it over months and years, much like learning to use Google.)
 
Outsourcing cognitive processing and critical thinking - especially during critical developmental periods - sounds like a winning solution. What could go wrong...

It's really fucking simple. Intelligent people who already prioritize critical thought over expediency, and who leverage AI on top of proper human development, will benefit from it. The rest, who submit a ChatGPT resume without ever reading it... they'll just get even dumber. I wonder which way society will lean as a whole... oh, that's right, I don't wonder.
 
The craziest part is that you can literally ask AI how to use AI, and it's a remarkably effective way to start learning. It will teach you about effective prompts and what to expect.
 
The AI bubble will pop. There's not enough original data left to scrape, so it's going through what's called an "ouroboros effect". Companies that are using AI aren't making a whole lot of money off it either; my job uses an AI to grade calls and it's not working out at all. And someone is getting all that data: if you have some big idea to make money, chances are someone will have access to that data and steal your idea.
 
For basic, non-critical things, AI can be very handy. Dumb and ignorant people won't recognise hallucinations, but for these things it won't really matter: stuff like finding that word you just can't remember, or getting some recipe ideas.

However, for things where hallucinations could be catastrophic if acted on, AI is very dangerous for the less intelligent and the laziest of our species.

If you make sure you are aware of the potential for hallucinations, and don't take AI output as sacrosanct, then it can be a very powerful tool. Say, for proofreading a paper or document, sketching out a rough idea from disparate thoughts, or doing some preliminary calculations.

These are all things that need to be checked by at least one human before being acted on, though, and that's where the problems start.

I use it a lot to practice Japanese. It probably would do a good job of teaching it from scratch (better than a lot of humans!), but there would also be a lot of mistakes and bad habits instilled. However, as I have a decent knowledge of Japanese, I can identify when something is off and go check it against an authoritative source (asking the AI to check, another instance, or another AI can be helpful too).

On the other hand, I used AI to write a sentence in Welsh today. I know zero Welsh, and my insult was probably not a good translation.
 
The AI bubble will pop. There's not enough original data left to scrape, so it's going through what's called an "ouroboros effect". Companies that are using AI aren't making a whole lot of money off it either; my job uses an AI to grade calls and it's not working out at all. And someone is getting all that data: if you have some big idea to make money, chances are someone will have access to that data and steal your idea.
We're never going to be in a position where we run out of data to analyze. Reams of new, original data are created every hour of every day. For AI to use that data effectively, though, it has to be sanitized and curated. A lot of AI projects fail because they're fed garbage data and people expect AI to automagically turn that garbage into gold.

I don't believe there's an AI bubble in general. A lot of solutions that companies are being sold as Ai are not actually AI. They still use the same canned workflows they've used for years with maybe a bit of RAG thrown in. Companies that are slapping an AI label on their non-AI products to try to boost sales are creating their own bubble that will absolutely burst. But the creation and effective usage of large language models and reasoning models isn't going to be impacted by that marketing bubble bursting.
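For what it's worth, "a bit of RAG" can be as simple as this: retrieve the stored documents that best match the question and paste them into the prompt before sending it to the model. A toy sketch (the word-overlap scoring and the example documents are invented for illustration; real systems use vector embeddings):

```python
def retrieve(query, documents, k=1):
    """Return the k documents sharing the most words with the query.

    This is the retrieval half of retrieval-augmented generation (RAG)
    in its crudest form: real systems rank with embedding similarity
    rather than raw word overlap.
    """
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def augment(query, documents):
    """Build the augmented prompt: retrieved context plus the question."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "Refunds are processed within 5 business days.",
    "Support hours are 9am to 5pm on weekdays.",
]
print(augment("how long do refunds take", docs))
```

The point of the distinction above is that this kind of bolt-on retrieval over a canned workflow is cheap to add, which is exactly why an "AI" label on a product doesn't say much by itself.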

When put to use for real knowledge work, AI can be very effective as long as there's a skilled human with their hands on the reins. Generative AI is transforming how a lot of people work, and we're just now starting to see the possibilities of agentic AI powered by reasoning models. People should absolutely learn to use these tools if they work IT or data science/analytics jobs. They enable faster work with fewer errors, and it's becoming difficult for people to outwork the machines. So you may as well build your own AI robot army and be at the front of the wave.
 
What does "learning AI" even mean?! Writing prompts? I'm confused.
You'd be surprised how many people don't have a clue how to even start with AI, or are completely incapable of phrasing a prompt to get a half-decent response. People as a whole are way more incompetent than I think most of us realize. Those who skate by on button-pushing skills and don't open themselves up to learning new things are the ones who are going to get left behind fast.

I personally think AI is in a bubble, still at the phase where everything seems possible and it's the second coming. Soon we'll hit the phase where there will be a pushback and the tech will be called a fad or whatnot; valuations and demand will drop slightly, then things will equalize and the real, sustainable growth will start. It's how these things always go; AI is just moving through it faster than most.
 
"Learning AI" is just going to be one of those grifty things like courses on how not to be incel or how to sell things on eBay but call it a business. The less gullible of us are just going to type prompts and copy and paste shit into a textbox.
 
How could it take "tons and tons" of time to work out how to ask a basic question in a natural language?
Despite all the hype, the models are still far from interpreting natural language really well; you need to guide and constrain them to get what you want. An example:

Shit prompt: write me a prospecting email for selling X.
Good prompt: write me a prospecting email for selling X to persona Y. Be direct and factual.

Just doing that will get you much better results, and it only goes up from there.
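The same idea can be sketched in code (the helper and field names here are invented for illustration, not any real API): the "good prompt" is just the "shit prompt" with audience and tone constraints bolted on.

```python
def build_prompt(task, persona=None, tone=None):
    """Assemble a prompt from a base task plus optional constraints.

    Adding the persona and tone fields is what separates the vague
    prompt from the constrained one in the example above.
    """
    parts = [task]
    if persona:
        parts.append(f"Target audience: {persona}.")
    if tone:
        parts.append(f"Tone: {tone}.")
    return " ".join(parts)

# Vague prompt: the model has to guess the audience and style.
print(build_prompt("Write me a prospecting email for selling X."))

# Constrained prompt: the extra fields steer the output.
print(build_prompt(
    "Write me a prospecting email for selling X.",
    persona="operations managers at mid-size logistics firms",
    tone="direct and factual",
))
```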
 
Despite all the hype, the models are still far from interpreting natural language really well; you need to guide and constrain them to get what you want. An example:

Shit prompt: write me a prospecting email for selling X.
Good prompt: write me a prospecting email for selling X to persona Y. Be direct and factual.

Just doing that will get you much better results, and it only goes up from there.
Sheesh, this sounds like learning a new language.
 