Let’s be honest about what really scares us about AI like ChatGPT.

We’re not really scared of AI. We’re scared of humans.

❗ We’re scared that people will discount human knowledge and intelligence, even though they’re the foundation these systems are built on.

❗ We’re scared because there isn’t, and may never be, any incentive or means to attribute value to individuals.

❗ We’re scared that people won’t put up guardrails in time to protect us in a world that doesn’t need us.

(And IMHO we should be.)

But at the end of the day, AI is a tool. Even the most powerful tools are not inherently good or bad, and banning their use in some places can’t be more than a short-term solution.

Tools can malfunction, but usually it’s the humans we need to worry about.

I was glad to hear that Stability AI will allow artists to opt out of having their work used to train AI. But that’s probably not a sustainable strategy.

What can we do to increase the likelihood that AI will be a net benefit?

• Educating the public on how to work WITH it

• Fostering discourse around its benefits, downsides, and appropriate uses

• Educating decision-makers on the nature of ML models

• Developing a scalable system of attribution for input data

• ?



On a tangent, this is why I love reading sci-fi. 👽

A lot of sci-fi isn’t so much about weird fictional technologies or space or even aliens; it’s actually about *people*.

How would humanity react? What kinds of decisions would it make and with what potential consequences? These sorts of questions.