
Agentic News
π Today's Agentic News
A curated selection of today's most important AI developments.
π Latest Research Papers
Research Papers: Showing 3 items. Latest academic research in AI and machine learning.
Reasoning Models Can Be Effective Without Thinking

Key Results
- β’ NoThinking consistently outperforms Thinking in terms of accuracy while using fewer tokens.
- β’ In low-budget scenarios, NoThinking achieves higher pass@k accuracy than Thinking.
- β’ Parallel scaling with NoThinking reduces latency significantly while maintaining or improving accuracy.
Key Insights
- β’ NoThinking approach bypasses explicit reasoning processes and can outperform traditional Thinking methods.
- β’ NoThinking shows better accuracy-cost tradeoffs, especially in low-budget settings.
- β’ Parallel scaling combined with NoThinking enhances performance and reduces latency.
PerceptionLM: Open-Access Data and Models for Detailed Visual Understanding

Key Results
- β’ PLM achieves competitive performance across 40 image and video benchmarks, comparable to state-of-the-art models.
- β’ The PLM-8B model outperforms existing models in fine-grained video question answering and video captioning tasks.
- β’ The introduction of PLM-VideoBench sets a new state-of-the-art in detailed visual understanding.
Key Insights
- β’ PerceptionLM (PLM) is a fully open and reproducible vision-language model for detailed visual understanding.
- β’ The paper addresses the lack of transparency in high-performing vision-language models by providing open-access data and models.
- β’ PLM includes a large dataset of 2.8M human-labeled video question-answer pairs and spatio-temporally grounded captions.
Memorization: A Close Look at Books

Key Results
- β’ Successfully auto-regressively reconstructed 'Aliceβs Adventures in Wonderland' with high similarity.
- β’ Extraction rates for popular books were significantly higher than for less popular or newer titles.
- β’ Fine-tuning improved extraction rates for instruction-tuned models, particularly for popular books.
Key Insights
- β’ LLMs can memorize training data, with extraction rates correlating with book popularity.
- β’ Instruction-tuned models show reduced memorization but can be partially restored through fine-tuning.
- β’ Lower transformer layers are primarily responsible for undoing regurgitation mitigations.
π» Trending on GitHub
GitHub Repositories: Showing 3 items. Most popular AI-related repositories today.
Byaidu/PDFMathTranslate

Key Features
- β’ Preserves formulas, charts, table of contents, and annotations.
- β’ Supports multiple languages and diverse translation services.
- β’ Provides command line tool, interactive user interface, and Docker support.
Shubhamsaboo/awesome-llm-apps

Key Features
- β’ Curated collection of LLM apps using RAG and AI agents.
- β’ Supports models from OpenAI, Anthropic, Google, and open-source alternatives.
- β’ Includes well-documented projects for learning and contribution.
elie222/inbox-zero

Key Features
- β’ AI Personal Assistant: Manages your email based on a plain text prompt file.
- β’ Reply Zero: Track emails that need your reply and those awaiting responses.
- β’ Smart Categories: Categorize everyone that's ever emailed you.
- β’ Bulk Unsubscriber: Quickly unsubscribe from emails in one-click.
- β’ Cold Email Blocker: Automatically block cold emails.
- β’ Email Analytics: Track your email activity with daily, weekly, and monthly stats.
π₯ HackerNews Highlights
HackerNews Posts: Showing 6 items. Top AI discussions from the HN community.
Gemma 3 QAT Models: Bringing AI to Consumer GPUs
Show HN: I built an AI that turns GitHub codebases into easy tutorials
Claude Code: Best practices for agentic coding
Maybe Meta's Llama claims to be open source because of the EU AI act
Welcome to the Era of Experience [pdf]
π― Reddit Discussions
Reddit Posts: Showing 8 items. Popular AI discussions across Reddit.
[R] Biologically-inspired architecture with simple mechanisms shows strong long-range memory (O(n) complexity)
The post discusses a new sequence modeling architecture inspired by biological principles, which shows strong long-range memory capabilities with O(n) complexity. The author shares preliminary results on various tasks, highlighting the architecture's simplicity and potential for improvement with further tuning. They seek feedback on additional tasks to evaluate the architecture's performance, particularly in areas requiring strong long-term memory.
China scientists develop flash memory 10,000Γ faster than current tech
Chinese scientists have developed a new type of flash memory that is 10,000 times faster than current technology, potentially revolutionizing data storage and processing speeds.
Why do people expect the AI/tech billionaires to provide UBI?
The post critiques the expectation that AI and tech billionaires will provide Universal Basic Income (UBI) as jobs are displaced by automation. The author argues that these billionaires prioritize their wealth and privileges over the welfare of displaced workers, citing their investments in bunkers and farmland as evidence of their disregard for societal issues. The post suggests that billionaires would prefer an apocalyptic scenario over equitable wealth distribution, challenging the naive belief that they will willingly support UBI.
Damned near pissed myself at o3's literal Math Lady
The post humorously discusses a moment involving OpenAI's model, referred to as 'o3', and its interpretation of a character known as the 'Math Lady', which elicited a strong reaction from the user.
I almost never thought this day would come...
The post expresses excitement about a significant milestone being reached, likely related to the release or development of a new AI model, as indicated by the link to Hugging Face.
China scientists develop flash memory 10,000Γ faster than current tech
Chinese scientists have developed a new type of flash memory that is 10,000 times faster than current technology.
"I stopped using 3.7 because it cannot be trusted not to hack solutions to tests"
The user expresses their decision to stop using version 3.7 of a tool, citing concerns that it cannot be trusted to provide reliable solutions for tests.
Citations have gone bad
The post discusses concerns about the reliability of citations in Perplexity, noting that recent changes have made it difficult to verify information sources. Users previously could see specific locations within papers, but now only the paper title is shown, leading to trust issues for academic work.
Found this digest helpful? Share it with your network!