I noticed an interesting development in speech recognition. Sierra has publicly released μ-Bench, a multilingual dataset for evaluating ASR systems, and it looks like a significant step.
The gist: the dataset includes 250 real customer-service recordings and 4,270 annotated audio clips. The main difference from existing benchmarks is that it isn't English-only. It covers five languages: English, Spanish, Turkish, Vietnamese, and Mandarin.
Especially intriguing is the new UER (Utterance Error Rate) metric. It distinguishes errors that change the meaning of an utterance from those that don't. That is far more nuanced than the traditional WER metric, which treats all errors as equal.
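To make the contrast concrete, here is a minimal sketch of both ideas: a standard WER computation (word-level edit distance over the reference length), and an utterance-level rate that counts only utterances flagged as containing a meaning-changing error. The `semantic_error` annotation field is a hypothetical stand-in — μ-Bench's actual schema and scoring code may differ.

```python
def wer(ref: str, hyp: str) -> float:
    """Word Error Rate: word-level Levenshtein distance / reference length.
    Every substitution, insertion, and deletion counts equally."""
    r, h = ref.split(), hyp.split()
    # Dynamic-programming edit-distance table over words
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,       # deletion
                          d[i][j - 1] + 1,       # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(r)][len(h)] / max(len(r), 1)

def utterance_error_rate(examples: list[dict]) -> float:
    """Sketch of a UER-style metric: the fraction of utterances whose
    transcription errors are annotated as meaning-changing.
    `semantic_error` is a hypothetical annotation field, not the
    benchmark's real schema."""
    flagged = sum(1 for ex in examples if ex["semantic_error"])
    return flagged / len(examples)
```

The key design difference: WER penalizes "uh" dropped from a transcript the same as a wrong account number, while the utterance-level metric only moves when a human (or model) judge decided the meaning actually changed.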
In the published results, Google Chirp-3 leads in accuracy, while Deepgram Nova-3 is the fastest but lags in multilingual performance. It will be interesting to see how this develops.
The dataset and the leaderboard are already available on Hugging Face, so other developers can join the evaluation. μ-Bench looks poised to become the standard for serious ASR evaluation in customer-service settings.