Tag: LLM


AI Learns to Think Fast: DeepSeek Running for the Masses

The currently over-hyped GenAI hysteria, backed by bloated AI infrastructure build-outs, will predictably cause big problems:

  1. Massive 100+ billion parameter models use a boatload of power for both training and inference. MS and Amazon (et al.) all publicly want to build …
[Image: anime character in a pilot uniform standing in front of a large mech]
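For the curious, here is a minimal sketch of what "running for the masses" can look like in practice: loading one of DeepSeek's distilled R1 checkpoints in 4-bit quantization on a single consumer GPU. The model ID, quantization settings, and prompt below are illustrative assumptions, not anything prescribed by the post.

```python
# Minimal sketch: run a distilled DeepSeek-R1 checkpoint locally.
# Assumes a single CUDA GPU plus the transformers, accelerate, and
# bitsandbytes packages; the model ID is one of DeepSeek's published
# distills, chosen here purely for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"

# 4-bit quantization keeps the ~7B-parameter model within a
# consumer-grade GPU's memory budget.
quant = BitsAndBytesConfig(load_in_4bit=True,
                           bnb_4bit_compute_dtype=torch.float16)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=quant, device_map="auto")

prompt = "In two sentences, why are distilled models cheaper to serve?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The point of the sketch is scale: a distilled, quantized model fits on hardware ordinary users already own, which is exactly the contrast the post draws with 100+ billion parameter data-center models.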

Private AI, We Salute You!

So VMware made the case this morning that data privacy is the big obstacle standing in the way of enterprises leveraging new generative AI solutions (e.g., ChatGPT). If “private AI” is so important, especially with generative AI (genAI) large language models …