Key OpenAI DevDay Announcements:

GPT-4 Turbo: new, cheaper model with a massive 128K-token context window; input is 3× cheaper and output 2× cheaper than GPT-4.
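To make the "3× cheaper input, 2× cheaper output" claim concrete, here's a minimal cost sketch. The per-1K-token rates used below are the prices announced at DevDay as I understand them ($0.03/$0.06 for GPT-4, $0.01/$0.03 for GPT-4 Turbo); treat them as an assumption and check OpenAI's pricing page for current numbers.

```python
# Rough request-cost comparison between GPT-4 and GPT-4 Turbo.
# Assumed per-1K-token prices (USD), per the DevDay announcement:
PRICES = {
    "gpt-4":       {"input": 0.03, "output": 0.06},
    "gpt-4-turbo": {"input": 0.01, "output": 0.03},
}

def cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request with the given token counts."""
    p = PRICES[model]
    return input_tokens / 1000 * p["input"] + output_tokens / 1000 * p["output"]

# A request that fills the full 128K context and gets 1K tokens back:
turbo = cost("gpt-4-turbo", 128_000, 1_000)   # ~$1.31
# Hypothetical: GPT-4 can't actually take 128K tokens, shown for price contrast only.
old = cost("gpt-4", 128_000, 1_000)           # ~$3.90
print(f"GPT-4 Turbo: ${turbo:.2f} vs GPT-4 pricing: ${old:.2f}")
```

Even at the full 128K context, a max-size Turbo request stays in the low dollars per call.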

Assistants API: essentially automates part of the vector-based retrieval and context management that everyone has been hand-coding until now. Nice, but not the huge upset to AI-agent startups that many were expecting.
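For context, this is the kind of retrieval loop people have been hand-rolling and that the Assistants API now absorbs: embed document chunks, embed the query, and pull the nearest chunks into the prompt. The sketch below is a toy, using a bag-of-words vector as a stand-in for a real embedding model and vector database; the chunk texts are made up for illustration.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Stand-in 'embedding': a word-count vector (toy assumption;
    real pipelines call an embedding model here)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k chunks most similar to the query."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

# Hypothetical document chunks:
chunks = [
    "GPT-4 Turbo supports a 128K token context window",
    "The Assistants API manages threads and retrieval",
    "Custom model training is offered to selected partners",
]
context = retrieve("how big is the context window", chunks, k=1)
# The retrieved chunk(s) would then be prepended to the model prompt.
```

Swap `embed` for a real embedding call and `chunks` for a vector store, and this is roughly the plumbing the Assistants API's retrieval tool replaces.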

Custom models: OpenAI finally admits openly that fine-tuning the last layers is different from, and vastly inferior to, having control over full model training, and says it will pick the few lucky winners who get access to real, full custom training.

Verdict:

It appears the only startups that got wrecked today are the countless vector-database-as-a-service startups.

Not as bad as many startups expected.

Rest of the AI startups can rest easy, for now.

As far as features go, the 128K context is massive, a potential gamechanger, but past turbo models have been heavily crippled, so let's see how well this new one actually performs.