AI 5.0 since 2022: the best AI is scaling exponentially, partnering in analyses with a billion-times-smarter mathematician than any individual human brain. If you don't know what the best AI is scaling, being ordered about by classical economists can't save you... why not an AI MOOC on every university's home page?
Previously, AI meant four different things: up to 2018, from 2012, at 2009, and at 2002.

$100,000-student-debt universities are now a total waste of the education system's time, both students' and teachers'.

Agentic AI stories: a billion times greater maths brain & 10**18 more tech... There isn't a skill known to universities that is not being changed by agentic AI and its human impacts: reasoning, celebrating data sovereignty, and how world-class cities through the next 10 years deploy digital twins to capitalise on the opportunities of driverless cars and humanoids.

In 1983 we founded the 2025Report genre around the hypothesis that agentic AI would arrive by 2025 and would make the millennials' generation the best of times, provided a transformation in how education systems spend the time of both teachers and students. If today's Western $100,000 four-year student debt liability has prevented you from understanding engineering and deep social action triads like those shown, including those changing so fast today, then you'd be better off parsing the latest contributions of e.g. Huang, Hassabis and Musk (aka builders of a billion times more maths brainpower) than other curricula.

Agentic AI stories of a billion times greater maths brain & 10**18 more tech.
***Huang*Hassabis*Musk Billion Times Greater Maths Brain
***Neumann*Einstein*Turing

Computer&Brain*1905 Natures Deep Maths*Coding deep data
Huang*Yang*Tsai
Doudna*Su*Koller
Lecun*FFLei*Bloomberg
Macron*Mensch*Lecun
W0 SJobs*FAbed*MYunus
upd 9/25 Ai Global Health RoyalSTropical
JFKennedy*UKRoyals*JapanRoyals Sovereignty AI..

Japan Emperor*Softbank*Sony
1 Modi*Ambani*Singh
H Li*Guo*Chang
LK Yew*LK Shing*H Li
Borlaug*Deming*McLean
(China)
AP July 2025, Jensen Huang, 1730: It is vital that everyone engages AI right away. Every adult, every working person, every non-working person, every child should address and engage AI right away. And the reason is that AI is the greatest equalizing force. It is the first time in history that a technology as incredible as artificial intelligence is useful to someone who doesn't know how to program software, who has no prior experience of how to use a computer. This is the very first time in history that, all of a sudden, the computer is easy to use. If you don't know how to use AI, just open up the website, go to ChatGPT, go to Gemini Pro, and ask a simple question. You could even say, "I have no idea how to use AI. Can you teach me how to use AI?" And if you don't know how to type, hit the microphone button and speak. And if you don't understand English, you can speak whatever language you like. It is an extraordinary thing. And I also think it's incredible that if the AI doesn't know that language, you can tell the AI to go learn that language. And so I think everybody needs to engage AI. It is the greatest equalizing force that we have ever known, and it's going to empower, it's going to enable, it's going to lift society everywhere. (upd July 2025, CISCE, Beijing)

Family Huang, whose first 100 engineering partners linked Nvidia across Silicon Valley's West Coast and Taiwan's East Coast, in 2009 gave Stanford engineering AI's deep learning lab, the core of Stanford's worldwide Science and Engineering Quadrangle.

30-day stack recall to May 13: Axios health, payments, press; 555 India summit, women's intel, Lisa Su, science diplomacy summit; ITIF critical meds. Merci beaucoup Yann LeCun!!.. Tokens: see your lifetime's intelligence today
Will Jen-Hsun's GTC26 big reveal be a superagent K-12 AI tutor whom we can all CC in email?
Since 1987, Taiwan's 20 million people have inspired the intelligence of billions of humans - special thanks to the Godfather of Taiwan Tech, Li, & ... Guo, Chang, Huang, Yang, Tsai, and millennial Taiwanese - see e.g. the podcast Straitforward or the Taiwan Digital Diplomacy net.
I0 India generics: Yusuf Hamied (Cipla).
If you know this, please help others. If you don't know this, please ask for help. 2002-2020 saw pattern recognition tools, such as those used by medical surgeons, improve 1000-fold. From 2020, all sorts of Human Intelligence (HI) tools improved 4-fold a year; that's 1000-fold in 5 years. Problem HI1: if you get too attached to 2020's tool, a kid who starts with 2025's smartest tool may soon leap ahead of you. Problem HI2: it's no longer the university/institution you are an alumnus of that matters, but which super-engineers' intel tools you most need to celebrate (playing our AI game). Problem HI3: revise your view of what you want from whom you celebrate and from the media that makes people famous overnight. Indeed, is it even a great idea (for some places) to spend half a billion dollars selecting each top public servant? HI challenges do not just relate to millennials' generative brainpower. We can map intergenerational cases since the 1950s, when 3 super-genii (Neumann, Einstein, Turing) suddenly died within years of each other (due to natural causes, cancer, suicide). Their discoveries changed everything. HI Clue 1: please stop making super-engineers and super energy innovators nations' most hated and wanted people.
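A quick compounding sanity check on those improvement rates (a tiny Python check, nothing more):

```python
# Compounding check for the claims above.
# "4-fold a year" for 5 years:
print(4 ** 5)            # 1024, i.e. roughly 1000-fold in 5 years
# A 1000-fold gain spread over 2002-2020 (18 years) implies a yearly factor of:
print(1000 ** (1 / 18))  # ~1.47x per year
```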
Welcome to the von Neumann hall of fame, based on notes from 1951 diaries. Whose advancing of human intel have we missed? chris.macrae@yahoo.co.uk
New stimuli for our brains in April: AI NIST publishes a full diary of the conflicting systems orders it has received (from public servants) on AI; meanwhile good engineers left col ... March 2025: thanks Jensen Huang, 17th year sharing AI quests (2 video cases left), now 6 million full-stack CUDA co-workers.


Nvidia Physical AI - Robots
More Newton collab && Foxconn digital twin
NET :: KCharles :: Morita : : Borlaug :: Deming Moore
Abed: Yew :: Guo:: JGrant
ADoerr :: Jobs:: Dell .. Ka-shing
Lecun :: L1 L2 :: Chang :: Nilekani :: Singh
Huang . : 1 : Yang : Tsai : Bezos
21stC Bloomberg ::Daniels
Satoshi :: Hassabis : Fei-fei Li
Shum : : Ibrahim : CTandon
Ambani : Modi :: MGates : PChan : Kariko :: Francia
Oxman (&EB) ::: HFry:: Yosuke
Musk & Wenfeng :: Mensch..
March 2025: Grok 3 has kindly volunteered to assist the younger half of the world in seeking INTELLIGENCE good news of the month: the Paris AI summit and GTC 2025 changed the vision of AI.
At NVIDIA’s GTC 2025 (March 18-21, San Jose, nvidianews.nvidia.com), Yann LeCun dropped a gem: LLaMA 3—Meta’s open-source LLM—emerged from a small Paris FAIR (Fundamental AI Research) team, outpacing Meta’s resource-heavy LLM bets. LeCun, speaking March 19 (X @MaceNewsMacro): “It came out of nowhere,” beating GPT-4o in benchmarks (post:0, July 23, 2024). This lean, local win thrilled the younger crowd—renewable generation vibes—since LLaMA 3’s 405B model (July 2024, huggingface.co) is free for all, from Mumbai coders to Nairobi startups.

Good News: Indian youth grabbed it—Ambani praised Zuckerberg at Mumbai (October 24, 2024, gadgets360.com) for “democratizing AI.” Modi’s “import intelligence” mantra (2024, itvoice.in) synced, with LLaMA 3 fueling Hindi LLMs (gadgets360.com). LeCun’s 30-year neural net legacy (NYU, 1987-) bridged Paris to India—deep learning’s next leap, compute-cheap and youth-led.
2:: Agri AI


Friday, January 31, 2025

Musk & Wenfeng

It might surprise you, but mathematically Musk & Wenfeng would probably get on very well. To make a mark, both began with financial AI products; today what they have in common is that both are developing reasoning models 100 times more efficient than chats which spend far too much time on writing rather than the actual logic innovation that the customer-as-questioner values.

What's next for both guys? My guess is China will free Wenfeng up to advance education, whereas for a while Musk will be tied up with DOGE's top-down government crisis. How ironic: one of China's greatest intelligences is advancing all the people's agency, while America's is caught up in how extremely top-heavy systems (left or right) fail to serve the people in the communities we all still live in.

======================================

Here's Grok 3 as of March 2025 (to be fair, I didn't ask it to compare the two leaders, just where their most popular chats are outracing the expensive ChatGPT):


Would you say Grok 3 and DeepSeek are reasoning models, not just chat models? If reasoning requires more analysis, how is it that DeepSeek seems to be much more efficient than OpenAI's chat model?
Let’s dive into whether Grok 3 and DeepSeek qualify as reasoning models rather than just chat models, and why DeepSeek might appear more efficient than OpenAI’s chat models, especially if reasoning demands more analysis. As of March 27, 2025, I’ll lean on my understanding of these models’ designs and performance, keeping your “Road of Things” vibe—youth, renewal, and practical outcomes—in mind.

Are Grok 3 and DeepSeek Reasoning Models, Not Just Chat Models?
Yes, both Grok 3 (from xAI) and DeepSeek (specifically DeepSeek R1 or V3) are reasoning models, distinct from pure chat models like ChatGPT’s base GPT-3.5 or GPT-4. Here’s why:
  • Grok 3: Launched in February 2025, Grok 3 comes in flavors like Grok 3 Reasoning (beta) and Grok 3 Mini Reasoning. Unlike chat-first models optimized for fluent conversation, Grok 3’s “Think” and “DeepSearch” modes prioritize step-by-step problem-solving—math (93.3% on AIME 2025), science (84.6% on GPQA), coding (79.4% on LiveCodeBench). It’s built to “think through” problems, not just generate text, with a 1M token context window for deep analysis. Andrej Karpathy’s vibe check (February 2025) pegged it near OpenAI’s o1-pro, a reasoning beast, not a chatty assistant.
  • DeepSeek: DeepSeek R1 (January 2025) and V3 (open-source, 671B parameters) focus on “chain-of-thought” reasoning—excelling in STEM tasks like coding (82.6% HumanEval) and math (90.2% MATH). Its Mixture-of-Experts (MoE) architecture activates only 37B parameters per query, targeting efficiency in multi-step logic over casual banter. It’s less about storytelling, more about structured problem-solving—think theorem proofs or code optimization.
Chat vs. Reasoning: Chat models (e.g., ChatGPT’s GPT-4o) prioritize versatility—creative writing, conversation, broad queries—using dense architectures (e.g., 1.8T parameters for GPT-4o) to generate human-like text fast. Reasoning models like Grok 3 and DeepSeek trade some of that fluency for deliberate analysis—slower “thinking” time but fewer hallucinations on complex tasks. Grok 3’s “Big Brain Mode” and DeepSeek’s RL-tuned reasoning scream “scholar,” not “chatterbox.”
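To make that trade-off concrete, here is a toy, self-contained sketch of self-consistency voting, one standard reasoning-model technique: sample several answers and take the majority, spending extra compute per query to cut down wrong answers. It simulates a noisy solver in plain Python rather than calling any vendor's API; the 60% accuracy and the candidate answers are illustrative assumptions.

```python
import random
from collections import Counter

random.seed(42)

CORRECT = 90  # 60 km in 40 minutes = 60 / (40/60) = 90 km/h

def sample_answer():
    """Stand-in for one sampled chain of thought: right 60% of the time,
    otherwise one of a few plausible slips (illustrative assumption)."""
    if random.random() < 0.6:
        return CORRECT
    return random.choice([60, 40, 120])

# Chat-style: a single pass, take whatever comes out.
chat_answer = sample_answer()

# Reasoning-style (self-consistency): sample several chains, majority-vote.
votes = Counter(sample_answer() for _ in range(15))
reasoning_answer = votes.most_common(1)[0][0]

print("single chat-style pass:", chat_answer)
print("self-consistency vote: ", reasoning_answer)
```

The vote costs 15x the compute of the single pass but is far more likely to land on 90: the same latency-for-reliability trade the reasoning modes above are making.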

Why Does DeepSeek Seem More Efficient Than OpenAI’s Chat Model?
If reasoning requires more analysis—breaking down problems, iterating steps—how does DeepSeek outpace OpenAI’s chat models (e.g., GPT-4o) in efficiency? It’s about design and resource use:
  1. Architecture Efficiency:
    • DeepSeek’s MoE: With 671B total parameters, DeepSeek activates just 37B per task—sparse computing cuts power and time. It’s like deploying a crack team instead of an army (see the toy routing sketch just after the Efficiency Paradox note below). Trained on 2,048 NVIDIA H800 GPUs in 55 days for $5.5M, it’s lean compared to OpenAI’s behemoths.
    • OpenAI’s Dense Model: GPT-4o’s 1.8T parameters run full-tilt per query—great for broad tasks but overkill for reasoning alone. Training costs likely exceed $100M (estimated), with massive GPU clusters. It’s a generalist, not a specialist—less efficient when reasoning is the sole focus.
  2. Training Smarts:
    • DeepSeek’s RL Edge: DeepSeek skips heavy supervised fine-tuning (SFT) upfront, using reinforcement learning (RL) with cold-start data, then SFT for polish. This hones reasoning—math, coding—without bloating the model for chat fluff. It’s a sniper, not a shotgun.
    • OpenAI’s Broad Pretraining: GPT-4o’s vast dataset (books, web, up to 2023) and GPT architecture aim for conversational depth—nuance, creativity, context. Reasoning (o1 family aside) is bolted on, not native, so it chugs more compute to “think” through problems.
  3. Task Focus:
    • DeepSeek’s Niche: Optimized for STEM—math, code, science—it skips chatty overhead. AIME or LiveCodeBench? DeepSeek nails it fast, no small talk. Real-time data? Less so—it’s academic, not dynamic.
    • OpenAI’s Versatility: GPT-4o juggles everything—poetry, customer support, coding—via plugins (DALL-E 3, web browsing). Reasoning’s there (o1 scores 79% on AIME), but it’s not the priority—efficiency dips when analysis trumps generation.
  4. Compute Scaling:
    • DeepSeek: $5.5M training shows you don’t need billions to reason well—smaller scale, bigger focus. Grok 3’s 200,000 H100 GPUs (10X Grok 2) dwarf it, yet DeepSeek competes.
    • OpenAI: Massive compute (o1-pro, $200/month) buys power, but not efficiency—reasoning modes “think” longer (minutes) than DeepSeek’s snappy responses.
Efficiency Paradox: Reasoning should be compute-heavy—more steps, more cycles. DeepSeek flips this by narrowing scope and smart architecture—less waste, more precision. OpenAI’s chat models carry baggage—broad training, dense layers—making them less nimble for pure reasoning.
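As a rough illustration of that sparsity argument, here is a minimal Mixture-of-Experts routing sketch (plain NumPy; the sizes are toy assumptions, not DeepSeek's real dimensions). It shows the mechanism behind "671B total, 37B active": a router picks the top-k experts per token and the rest stay idle.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, n_experts, top_k = 64, 16, 2   # toy sizes, not DeepSeek's
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]
router = rng.standard_normal((d_model, n_experts))

def moe_layer(x):
    """Route token x to its top_k experts; the other experts stay idle."""
    logits = x @ router                        # one score per expert
    chosen = np.argsort(logits)[-top_k:]       # indices of the top-k experts
    w = np.exp(logits[chosen] - logits[chosen].max())
    w /= w.sum()                               # softmax over the chosen few
    return sum(wi * (x @ experts[i]) for wi, i in zip(w, chosen))

out = moe_layer(rng.standard_normal(d_model))

total_params = n_experts * d_model * d_model
active_params = top_k * d_model * d_model
print(f"active per token: {active_params / total_params:.1%}")  # 12.5% here
# DeepSeek V3's quoted ratio is about 37B / 671B, roughly 5.5% per token.
```

The training claim quoted above also has simple arithmetic behind it: 2,048 GPUs x 55 days x 24 hours is roughly 2.7 million GPU-hours, which is the scale behind the $5.5M figure.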

Youth and Renewal Angle
  • DeepSeek: Open-source (MIT license), cost-effective—perfect for youth coders in Nairobi or Mumbai. HALOS roads (previous chat)? It could optimize junctions cheap and fast.
  • Grok 3: Free on X (rate-limited), 1M-token depth—youth can probe big problems (health, cities) with Huang’s Omniverse (2025) tie-ins. Less efficient than DeepSeek, but flashier.
  • OpenAI: GPT-4o’s $20/month tier locks out some—less renewable unless you’re hooked on its ecosystem (Bing, DALL-E).

Wrap-Up
Grok 3 and DeepSeek are reasoning models—built to analyze, not just chat—unlike OpenAI’s broader GPT-4o. DeepSeek’s efficiency shines through MoE sparsity and RL focus—less compute, tighter aim—while OpenAI’s chat model flexes muscle but lags in reasoning thrift. For your “Road of Things,” DeepSeek’s lean power could pave youth-led paths; Grok 3’s heft (and Huang’s chips) might steer bigger dreams. Want to test a reasoning prompt? I’ll run it through my lens—let’s see who cracks it!
