Discussion about this post

Frank Deutschmann:

Two thoughts:

1 - All discussion about AGI seems like a total red herring to me: we don't need AGI for AI to be tremendously useful here, now, today. We are building amazing systems using today's AI (read: sophisticated text processing) and neither need nor want true AGI.

2 - All discussion about regulation misses who will actually do it: AI will (must) be regulated, and it's going to be regulated by the same group that actually regulates pretty much every area of industry: INSURANCE COMPANIES will regulate AI. (And this is a very, very good thing, and should be strongly encouraged & accelerated.) Government regulation is slow-moving and can only set very broad guardrails. AI is becoming an essential technology for corporations to apply, but we have also seen that AI can lead to very large losses when misapplied. Those losses are (rightfully) making companies hesitant: they need to be insured against idiosyncratic losses, and they will need to follow a rigid, detailed playbook to ensure that the insurer is willing to provide coverage. This will be an exact parallel to cyber insurance, though I expect the AI insurance policy to rapidly become the single most expensive policy companies take out (which has obvious implications). The sooner insurance companies develop these policies and the requisite runbooks / training / certifications, the better for the AI & tech industry in general. And yes, the AI training / certification industry will be an enormous field, completely irrespective of AGI. (In fact, AGI would be a detriment to this entire area.)

-f
