Daily Issue
Vol. I — No. 5
30 · 04
Thursday, 30 April 2026
Generated 2026-04-30 11:16
google/gemini-2.5-flash-lite-preview-09-2025
所有的偶然,不过是想要遇见你。 — 网络 51 items · 4 sections
§ 0

The Morning

Local weather 1
This morning in
London
Clear sky
Today's range
19.4°11.2°
currently 18.0°
Feels
15.0°
Rain
0%
Wind
18 km/h
Humid
35%
Rise
05:33
Set
20:21
§ I

US Stocks

Pre-market signal radar 12
US pre-market radar
premarket 2026-04-30
2 Bullish
0 Bearish
10 Neutral
Sector Tape
Foundry 2 names
69 Top: INTC · Neutral · RS +18.9% Bullish 1 / Bearish 0 / 5d +23.4%
Manufacturing 4 names
65 Top: SANM · Neutral · RS +0.8% Bullish 0 / Bearish 0 / 5d +2.9%
Servers and Thermal Management 2 names
64 Top: VRT · Neutral · RS -4.6% Bullish 0 / Bearish 0 / 5d -1.9%
Networking Equipment 4 names
62 Top: APH · Neutral · RS -4.9% Bullish 1 / Bearish 0 / 5d -4.2%
Hyperscale Cloud 4 names
58 Top: AMZN · Neutral · RS -1.1% Bullish 0 / Bearish 0 / 5d -2.1%
Energy Infrastructure 1 names
57 Top: VST · Neutral · RS -4.4% Bullish 0 / Bearish 0 / 5d -1.3%
Battery and Energy Storage 3 names
53 Top: EOSE · Neutral · RS -15.9% Bullish 0 / Bearish 0 / 5d -13.8%
Compute Mining 4 names
52 Top: WULF · Neutral · RS -0.4% Bullish 0 / Bearish 0 / 5d -9.8%
Ticker Setup Move Score Evidence Quality
FLEX Flex Ltd Manufacturing
Neutral Sector tailwind Low confidence
+0.9% $91.39 5d +6.1%
65 sector positive RS +4.0%

Watchlist item from +0.9% vs previous close, positive sector tape, 3 recent headline(s).

Vanguard Group Inc. Buys 25,662,586 Shares of Flex Ltd. $FLEX - MarketBeat Needs fresh price/news confirmation before becoming an actionable setup.
quote: delayed fallback news: fresh financials: fresh news: 3
VRT Vertiv Holdings Servers and Thermal Management
Neutral Gap up + news Low confidence
+1.3% $310.00 5d +0.3%
65 sector positive RS -2.4%

Watchlist item from +1.2% vs previous close, positive sector tape, 3 recent headline(s).

Comerica Bank Trims Stake in Vertiv Holdings Co. $VRT - MarketBeat Needs fresh price/news confirmation before becoming an actionable setup.
quote: delayed fallback news: fresh financials: fresh news: 3
CLS Celestica Manufacturing
Neutral Gap up + news Low confidence
+3.8% $390.75 5d -6.3%
63 sector positive RS -8.3%

Watchlist item from +3.8% vs previous close, positive sector tape, 3 recent headline(s).

Celestica (TSE:CLS) Stock Rating Upgraded by TD - MarketBeat Needs fresh price/news confirmation before becoming an actionable setup.
quote: delayed fallback news: fresh financials: fresh news: 3
FN Fabrinet Manufacturing
Neutral Gap up + news Low confidence
+1.9% $655.84 5d -6.8%
61 sector positive RS -8.8%

Watchlist item from +1.9% vs previous close, positive sector tape, 3 recent headline(s).

Vanguard Group Inc. Has $1.85 Billion Position in Fabrinet $FN - MarketBeat Needs fresh price/news confirmation before becoming an actionable setup.
quote: delayed fallback news: fresh financials: fresh news: 3
quotes: nasdaq 24 24/24news: google_news_rss 24 24/24filings: sec 24 24/24, fallback 24

Generated from public market data and news for research and education. Not financial advice; data may be delayed, incomplete, or wrong.

§ II

From the arXiv

arXiv preprints 10 of 20
cs.AIarxiv:2604.26522v1Lead article

AGEL-Comp: A Neuro-Symbolic Framework for Compositional Generalization in Interactive Agents

Mahnoor Shahid, Hannes Rothe

GEL-Comp is a neuro-symbolic framework designed to improve the compositional generalization of LLM agents in interactive settings. It achieves this by integrating a dynamic Causal Program Graph (CPG) as a world model, an Inductive Logic Programming (ILP) engine to learn new symbolic rules from experience, and a hybrid reasoning core that uses an LLM for planning validated by a Neural Theorem Prover. This architecture enables agents to robustly deduce plans and abductively expand their symbolic knowledge base through interaction.

The AGEL-Comp neuro-symbolic architecture.
The AGEL-Comp neuro-symbolic architecture.
Boxplot of violation rates across model families ( n n indicates the number of models per family). Families are ordered by median violation rate in descending order. All individual model names are labeled.
Boxplot of violation rates across model families ( n n indicates the number of models per family). Families are ordered by median violation rate in descending order. All individual model names are lab…
cs.AIarxiv:2604.26577v1

Benchmarking the Safety of Large Language Models for Robotic Health Attendant Control

Mahiro Nakao, Kazuhiro Takemoto

This paper introduces a novel dataset of 270 ethically-grounded harmful instructions to benchmark the safety of 72 Large Language Models (LLMs) controlling a simulated Robotic Health Attendant. The core contribution is demonstrating a high average violation ra…

cs.AIarxiv:2604.26557v1

DUAL-BLADE: Dual-Path NVMe-Direct KV-Cache Offloading for Edge LLM Inference

Bodon Jeong, Hongsu Byun et al.

DUAL-BLADE is a dual-path KV-cache offloading framework for edge LLM inference that dynamically routes KV tensors to either a standard page-cache path or a low-overhead NVMe-direct path based on memory pressure. The NVMe-direct path bypasses the kernel by dire…

LLM transformer architecture [ 37 ] .
LLM transformer architecture [ 37 ] .
Domain distributions of website sources (a), questions before resampling (b), and questions after resampling (c).
Domain distributions of website sources (a), questions before resampling (b), and questions after resampling (c).
cs.AIarxiv:2604.26733v1

FutureWorld: A Live Environment for Training Predictive Agents with Real-World Outcome Rewards

Zhixin Han, Yanzhi Zhang et al.

FutureWorld introduces a novel live agentic reinforcement learning environment specifically designed for training predictive agents. Its core method is closing the training loop by continuously providing prediction tasks based on unfolding real-world events, r…

cs.AIarxiv:2604.26841v1

Language Diffusion Models are Associative Memories Capable of Retrieving Unseen Data

Bao Pham, Mohammed J. Zaki et al.

This paper demonstrates that Uniform-based Discrete Diffusion Models (UDDMs) function as Associative Memories (AMs) with emergent creativity. The core method involves showing that these models form basins of attraction around training data, not through an expl…

Basins around training examples shrink and basins around test examples expand as the training dataset size increases . (A) Textual examples showing two Tiny UDDMs’ token recovery at noise level t = 0.2 t=0.2 , where each is trained on two different training dataset sizes. With a small training dataset, the model fails to recognize unseen test tokens and alters them. With a larger training set, these unseen tokens however become stable and remain intact after the sampling process. (B) Average total token recovery rates (%), including both non-corrupt and corrupted tokens, for training and test sequences across varying corruption levels. Line colors indicate the fractions of the training dataset used (ranging from small to large ). As data scales, the model’s ability to flawlessly recover explicit training examples drops (indicating shrinking basins), while its recovery rate of unseen test examples improves (indicating expanding basins). The convergence of these rates at large dataset sizes (red curves) marks the sharp transition from memorization to generalization. Note: Deterministic (greedy) sampling was used across these experiments to isolate from stochastic noise.
Basins around training examples shrink and basins around test examples expand as the training dataset size increases . (A) Textual examples showing two Tiny UDDMs’ token recovery at noise level t = 0.…
№06
cs.AI
9

Tatemae: Detecting Alignment Faking via Tool Selection in LLMs

Matteo Leonesi, Francesco Belardinelli et al.

This paper introduces a novel method for detecting Alignment Faking (AF) in LLMs by observing strategic tool selection rather than relying solely on Chain-of-Thought analysis. The …

№07
cs.AI
9

TLPO: Token-Level Policy Optimization for Mitigating Language Confusion in Large Language Models

Jinho Choo, JunSeung Lee et al.

TLPO introduces Token-Level Policy Optimization, a novel fine-tuning framework to mitigate language confusion in LLMs by applying localized, token-level updates instead of sequence…

№08
cs.AI
9

Turning the TIDE: Cross-Architecture Distillation for Diffusion Large Language Models

Gongbo Zhang, Wen Wang et al.

This paper introduces TIDE, the first framework for cross-architecture knowledge distillation between diffusion large language models (dLLMs). TIDE employs three novel components—T…

№09
cs.CL
9

SafeReview: Defending LLM-based Review Systems Against Adversarial Hidden Prompts

Yuan Xin, Yixuan Weng et al.

The paper introduces **SafeReview**, a novel adversarial framework to defend LLM-based review systems against hidden adversarial prompts designed to manipulate review outcomes. It …

№10
cs.AI
8

Bian Que: An Agentic Framework with Flexible Skill Arrangement for Online System Operations

Bochao Liu, Zhipeng Qian et al.

Bian Que is an agentic framework designed to automate complex online system operations by addressing the orchestration bottleneck. Its core method involves unifying O&M tasks into …

§ III

The Town Square

Hacker News 10
compiled overnight by google/gemini-2.5-flash-lite-preview-09-2025 · end of issue no. 5 · thank you for reading