AI is not going to kill the planet

AI is not going to kill us because, frankly, the current ML-based technologies are just not that smart and, in my view, can never equate to human intelligence in the broadest sense. This is a terrific article and very well-stated commentary. Very much worth a read in these hyperbolic clickbait times.

One of my favourite quotes from a recent AFR article: “I think it’s a way of controlling the narrative,” said Sasha Luccioni, a research scientist at AI start-up Hugging Face. “We should be talking about legislation, things like the risks of bias, the ways AI is being used that is not good for society,” she said in an interview on Bloomberg Technology. “But instead we are focusing on the hypothetical risks. It is a bit of a [magic] hat trick.”

Couldn’t agree more.

Taking “full” responsibility?

I have to admit that I find it difficult to process the idea of people who “take full responsibility” for things going wrong with no consequences at all attached. Is that really taking responsibility?

It used to be that you’d lose your job, maybe resign, or face some other kind of pain if you were responsible, but nowadays, whether it’s politics or business, people take full responsibility whilst barely missing a beat.

I remember once fronting up to a boss to take responsibility for a terrible mistake one of my people had made under my watch and I fully expected to be on my bike. He graciously gave me a second chance but I was not expecting that.

But, take a close look every time you hear that phrase in the media these days and you’ll almost always find that it means nothing at all.

To quote one of my favourite lines from one of my favourite movies… that word you are using, I do not think it means what you think it means…

Sometimes the old ways…

This is a really insightful article which I think articulates some important ideas about the real value of technology very nicely, especially when we consider the potential impact of things like ChatGPT. Strangely, it got me thinking about the value of “good old fashioned” approaches which we’ve often shelved a little too quickly in the face of new technology that actually didn’t improve things. Worth a read.

Tick Tock… Boom

This article makes a great point. When legislators clearly don’t seem to have any understanding of the technology they want to regulate, they sound stupid and are hard to take seriously. Australia is not immune – I’ve read and heard some doofus comments by key government people on cybersecurity recently that make me cringe because of the ignorance of people who should know better.

Get your ChatGPT game on

A lot of people have naturally been giving ChatGPT a go, with varying results. What quickly becomes obvious is that the quality of your prompt heavily determines the quality of the response. For those of you struggling, here’s a tip I picked up along the way… get ChatGPT to coach you. Cut and paste the text below into the prompt box and iterate until you’ve got something that looks on the money. Then tell ChatGPT you’re done and use the prompt you’ve developed together:

“Become my Prompt Creator to help craft the ideal prompt for ChatGPT. Follow these steps: 1. Ask for the prompt topic. 2. Create 3 sections: a) Revised prompt, b) Suggestions, and c) Questions. 3. Iterate with my input until I confirm completion.”

This will massively improve your ChatGPT game.
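For the programmatically inclined, the same iterate-until-done flow can be sketched as a simple conversation loop. This is just an illustrative sketch: the `send` callable and `refine_prompt` helper are my own hypothetical names, standing in for whatever chat backend you use (the ChatGPT web UI done by hand, or an API client), so the flow itself can be shown without any real service.

```python
# The meta-prompt from the article, used to open the conversation.
PROMPT_CREATOR = (
    "Become my Prompt Creator to help craft the ideal prompt for ChatGPT. "
    "Follow these steps: 1. Ask for the prompt topic. "
    "2. Create 3 sections: a) Revised prompt, b) Suggestions, and c) Questions. "
    "3. Iterate with my input until I confirm completion."
)

def refine_prompt(send, replies):
    """Run the iterative prompt-refinement loop.

    send:    callable(history) -> assistant reply string (your chat backend)
    replies: user inputs in order, ending with "done" to confirm completion
    Returns the full message history; the last assistant message before
    "done" holds the refined prompt you then reuse.
    """
    history = [{"role": "user", "content": PROMPT_CREATOR}]
    history.append({"role": "assistant", "content": send(history)})
    for reply in replies:
        history.append({"role": "user", "content": reply})
        if reply.strip().lower() == "done":
            break
        history.append({"role": "assistant", "content": send(history)})
    return history
```

In practice `send` would call your model of choice; the point is simply that the meta-prompt goes first, your feedback alternates with the model’s revisions, and you stop when the revised prompt looks right.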

Interfaces Matter

When it comes to interfaces, simple things can make a big difference. I think the Hyundai approach here is instructive. Certain interface mechanisms have survived because they work. Replacing them with more technologically advanced mechanisms doesn’t always end up with a better result. I definitely see this with my Apple Watch versus my Garmin for running… the buttons are far superior.