Hugging Face ‧ Blog
1. Introducing the Open Arabic LLM Leaderboard
2. Hugging Face x LangChain : A new partner package in LangChain
3. PaliGemma – Google's Cutting-Edge Open Vision Language Model
4. License to Call: Introducing Transformers Agents 2.0
5. Building Cost-Efficient Enterprise RAG applications with Intel Gaudi 2 and Intel Xeon
6. Subscribe to Enterprise Hub with your AWS Account
7. Introducing the Open Leaderboard for Hebrew LLMs!
8. Bringing the Artificial Analysis LLM Performance Leaderboard to Hugging Face
9. Powerful ASR + diarization + speculative decoding with Hugging Face Inference Endpoints
10. Improving Prompt Consistency with Structured Generations
11. StarCoder2-Instruct: Fully Transparent and Permissive Self-Alignment for Code Generation
12. Introducing the Open Chain of Thought Leaderboard
13. Jack of All Trades, Master of Some, a Multi-Purpose Transformer Agent
14. The Open Medical-LLM Leaderboard: Benchmarking Large Language Models in Healthcare
15. Welcome Llama 3 - Meta's new open LLM
16. Introducing the LiveCodeBench Leaderboard - Holistic and Contamination-Free Evaluation of Code LLMs
17. Ryght’s Journey to Empower Healthcare and Life Sciences with Expert Support from Hugging Face
18. Running Privacy-Preserving Inference on Hugging Face Endpoints
19. AI Apps in a Flash with Gradio's Reload Mode
20. Introducing Idefics2: A Powerful 8B Vision-Language Model for the community
21. Vision Language Models Explained
22. Making thousands of open LLMs bloom in the Vertex AI Model Garden
23. CodeGemma - an official Google release for code LLMs
24. Public Policy at Hugging Face
25. Text2SQL using Hugging Face Dataset Viewer API and Motherduck DuckDB-NSQL-7B
26. Hugging Face partners with Wiz Research to Improve AI Security
27. Blazing Fast SetFit Inference with 🤗 Optimum Intel on Xeon
28. Bringing serverless GPU inference to Hugging Face users
29. Pollen-Vision: Unified interface for Zero-Shot vision models in robotics
30. Binary and Scalar Embedding Quantization for Significantly Faster & Cheaper Retrieval
31. Total noob’s intro to Hugging Face Transformers
32. Introducing the Chatbot Guardrails Arena
33. GaLore: Advancing Large Model Training on Consumer-grade Hardware
34. Cosmopedia: how to create large-scale synthetic data for pre-training Large Language Models
35. A Chatbot on your Laptop: Phi-2 on Intel Meteor Lake
36. quanto: a pytorch quantization toolkit
37. Easily Train Models with H100 GPUs on NVIDIA DGX Cloud
38. Unlocking the conversion of Web Screenshots into HTML Code with the WebSight Dataset
39. CPU Optimized Embeddings with 🤗 Optimum Intel and fastRAG
40. Introducing ConTextual: How well can your Multimodal model jointly reason over text and image in text-rich scenes?
41. Data is better together
42. Text-Generation Pipeline on Intel® Gaudi® 2 AI Accelerator
43. StarCoder2 and The Stack v2
44. TTS Arena: Benchmarking Text-to-Speech Models in the Wild
45. AI Watermarking 101: Tools and Techniques
46. 🪆 Introduction to Matryoshka Embedding Models
47. Introducing the Red-Teaming Resistance Leaderboard
48. Fine-Tuning Gemma Models in Hugging Face
49. Welcome Gemma - Google's new open LLM
50. Introducing the Open Ko-LLM Leaderboard: Leading the Korean LLM Evaluation Ecosystem
51. 🤗 PEFT welcomes new merging methods
52. Synthetic data: save money, time and carbon with open source
53. AMD Pervasive AI Developer Contest!
54. From OpenAI to Open LLMs with Messages API
55. SegMoE: Segmind Mixture of Diffusion Experts
56. NPHardEval Leaderboard: Unveiling the Reasoning Abilities of Large Language Models through Complexity Classes and Dynamic Updates
57. Hugging Face Text Generation Inference available for AWS Inferentia2
58. Constitutional AI with Open LLMs
59. Patch Time Series Transformer in Hugging Face
60. Introducing the Enterprise Scenarios Leaderboard: a Leaderboard for Real World Use Cases
61. Accelerate StarCoder with 🤗 Optimum Intel on Xeon: Q8/Q4 and Speculative Decoding
62. The Hallucinations Leaderboard, an Open Effort to Measure Hallucinations in Large Language Models
63. An Introduction to AI Secure LLM Safety Leaderboard
64. Hugging Face and Google partner for open AI collaboration
65. Open-source LLMs as LangChain Agents
66. PatchTSMixer in HuggingFace
67. Fine-Tune W2V2-Bert for low-resource ASR with 🤗 Transformers
68. Preference Tuning LLMs with Direct Preference Optimization Methods
69. Accelerating SD Turbo and SDXL Turbo Inference with ONNX Runtime and Olive
70. A guide to setting up your own Hugging Face leaderboard: an end-to-end example with Vectara's hallucination leaderboard
71. Faster fine-tuning using TRL & Unsloth
72. Welcome aMUSEd: Efficient Text-to-Image Generation
73. LoRA training scripts of the world, unite!
74. Speculative Decoding for 2x Faster Whisper Inference
75. 2023, year of open LLMs
76. Mixture of Experts Explained
77. Welcome Mixtral - a SOTA Mixture of Experts on Hugging Face
78. SetFitABSA: Few-Shot Aspect Based Sentiment Analysis using SetFit
79. Goodbye cold boot - how we made LoRA inference 300% faster
80. Optimum-NVIDIA - Unlock blazingly fast LLM inference in just 1 line of code
81. AMD + 🤗: Large Language Models Out-of-the-Box Acceleration with AMD GPU
82. Open LLM Leaderboard: DROP deep dive
83. SDXL in 4 steps with Latent Consistency LoRAs
84. Comparing the Performance of LLMs: A Deep Dive into Roberta, Llama 2, and Mistral for Disaster Tweets Analysis with Lora
85. Introducing Prodigy-HF: a direct integration with Hugging Face
86. Make your llama generation time fly with AWS Inferentia2
87. Introducing Storage Regions on the HF Hub
88. Creating open machine learning datasets? Share them on the Hugging Face Hub!
89. Personal Copilot: Train Your Own Coding Assistant
90. Interactively explore your Huggingface dataset with one line of code
91. Deploy Embedding Models with Hugging Face Inference Endpoints
92. The N Implementation Details of RLHF with PPO
93. Exploring simple optimizations for SDXL
94. Gradio-Lite: Serverless Gradio Running Entirely in Your Browser
95. Accelerating over 130,000 Hugging Face models with ONNX Runtime
96. Accelerating Stable Diffusion XL Inference with JAX on Cloud TPU v5e
97. Chat Templates: An End to the Silent Performance Killer
98. Deploying the AI Comic Factory using the Inference API
99. Finetune Stable Diffusion Models with DDPO via TRL
100. Ethics and Society Newsletter #5: Hugging Face Goes To Washington and Other Summer 2023 Musings
101. Non-engineers guide: Train a LLaMA 2 chatbot
102. Llama 2 on Amazon SageMaker a Benchmark
103. Inference for PROs
104. Rocket Money x Hugging Face: Scaling Volatile ML Models in Production
105. Object Detection Leaderboard
106. Introduction to 3D Gaussian Splatting
107. Optimizing your LLM in production
108. Introducing Würstchen: Fast Diffusion for Image Generation
109. Fine-tuning Llama 2 70B using PyTorch FSDP
110. Overview of natively supported quantization schemes in 🤗 Transformers
111. SafeCoder vs. Closed-source Code Assistants
112. Efficient Controllable Generation for SDXL with T2I-Adapters
113. Spread Your Wings: Falcon 180B is here
114. Fetch Cuts ML Processing Latency by 50% Using Amazon SageMaker & Hugging Face
115. AudioLDM 2, but faster ⚡️
116. Code Llama: Llama 2 learns to code
117. Deprecation of Git Authentication using password
118. Making LLMs lighter with AutoGPTQ and transformers
119. Introducing SafeCoder
120. Introducing IDEFICS: An Open Reproduction of State-of-the-art Visual Language Model
121. Hugging Face Platform on the AWS Marketplace: Pay with your AWS Account
122. Deploying Hugging Face Models with BentoML: DeepFloyd IF in Action
123. Optimizing Bark using 🤗 Transformers
124. Releasing Swift Transformers: Run On-Device LLMs in Apple Devices
125. Fine-tune Llama 2 with DPO
126. Deploy MusicGen in no time with Inference Endpoints
127. Towards Encrypted Large Language Models with FHE
128. Huggy Lingo: Using Machine Learning to Improve Language Metadata on the Hugging Face Hub
129. Practical 3D Asset Generation: A Step-by-Step Guide
130. Open-sourcing Knowledge Distillation Code and Weights of SD-Small and SD-Tiny
131. Stable Diffusion XL on Mac with Advanced Core ML Quantization
132. Introducing Agents.js: Give tools to your LLMs using JavaScript
133. AI Policy @🤗: Open ML Considerations in the EU AI Act
134. Results of the Open Source AI Game Jam
135. Happy 1st anniversary 🤗 Diffusers!
136. Llama 2 is here - get it on Hugging Face
137. Open-Source Text Generation & LLM Ecosystem at Hugging Face
138. Building an AI WebTV
139. Fine-tuning Stable Diffusion models on Intel CPUs
140. Making ML-powered web games with Transformers.js
141. Deploy LLMs with Hugging Face Inference Endpoints
142. Making a web app generator with open ML models
143. Leveraging Hugging Face for complex generative AI use cases
144. Accelerating Vision-Language Models: BridgeTower on Habana Gaudi2
145. Ethics and Society Newsletter #4: Bias in Text-to-Image Models
146. What's going on with the Open LLM Leaderboard?
147. Panel on Hugging Face
148. AI Policy @🤗: Response to the U.S. NTIA's Request for Comment on AI Accountability
149. Fine-tuning MMS Adapter Models for Multi-Lingual ASR
150. Yes, Transformers are Effective for Time Series Forecasting (+ Autoformer)
151. Deploy Livebook notebooks as apps to Hugging Face Spaces
152. Faster Stable Diffusion with Core ML on iPhone, iPad, and Mac
153. Announcing our new Content Guidelines and Policy
154. Hugging Face and AMD partner on accelerating state-of-the-art models for CPU and GPU platforms
155. The Hugging Face Hub for Galleries, Libraries, Archives and Museums
156. Can foundation models label data like humans?
157. DuckDB: run SQL queries on 50,000+ datasets on the Hugging Face Hub
158. Welcome fastText to the 🤗 Hub
159. The Falcon has landed in the Hugging Face ecosystem
160. AI Speech Recognition in Unity
161. Announcing the Open Source AI Game Jam 🎮
162. Introducing BERTopic Integration with Hugging Face Hub
163. Introducing the Hugging Face LLM Inference Container for Amazon SageMaker
164. Optimizing Stable Diffusion for Intel CPUs with NNCF and 🤗 Optimum
165. Hugging Face Collaborates with Microsoft to Launch Hugging Face Model Catalog on Azure
166. Making LLMs even more accessible with bitsandbytes, 4-bit quantization and QLoRA
167. Hugging Face and IBM partner on watsonx.ai, the next-generation enterprise studio for AI builders
168. Instruction-tuning Stable Diffusion with InstructPix2Pix
169. Safetensors audited as really safe and becoming the default
170. Smaller is better: Q8-Chat, an efficient generative AI experience on Xeon
171. Large-scale Near-deduplication Behind BigCode
172. Introducing RWKV — An RNN with the advantages of a transformer
173. Run a Chatgpt-like Chatbot on a Single GPU with ROCm
174. Hugging Face Selected for the French Data Protection Agency Enhanced Support Program
175. Assisted Generation: a new direction toward low-latency text generation
176. Creating a Coding Assistant with StarCoder
177. A Dive into Text-to-Video Models
178. StarCoder: A State-of-the-Art LLM for Code
179. How to Install and Use the Hugging Face Unity API
180. Training a language model with 🤗 Transformers using TensorFlow and TPUs
181. Databricks ❤️ Hugging Face: up to 40% faster training and tuning of Large Language Models
182. Running IF with 🧨 diffusers on a Free Tier Google Colab
183. Introducing HuggingFace blog for Chinese speakers: Fostering Collaboration with the Chinese AI community
184. How to host a Unity game in a Space
185. Accelerating Hugging Face Transformers with AWS Inferentia2
186. Graph Classification with Transformers
187. Creating Privacy Preserving AI with Substra
188. Snorkel AI x Hugging Face: unlock foundation models for enterprises
189. StackLLaMA: A hands-on guide to train LLaMA with RLHF
190. Ethics and Society Newsletter #3: Ethical Openness at Hugging Face
191. Fast Inference on Large Language Models: BLOOMZ on Habana Gaudi2 Accelerator
192. Accelerating Stable Diffusion Inference on Intel CPUs
193. Federated Learning using Hugging Face and Flower
194. Train your ControlNet with diffusers
195. Jupyter X Hugging Face
196. Multivariate Probabilistic Time Series Forecasting with Informer
197. Fine-tuning 20B LLMs with RLHF on a 24GB consumer GPU
198. New ViT and ALIGN Models From Kakao Brain
199. ControlNet in Diffusers 🧨
200. Using Machine Learning to Aid Survivors and Race through Time
201. Ethical guidelines for developing the Diffusers library
202. How Hugging Face Accelerated Development of Witty Works Writing Assistant
203. Red-Teaming Large Language Models
204. Swift Diffusers: Fast Stable Diffusion for Mac
205. Fetch Consolidates AI Tools and Saves 30% Development Time with Hugging Face on AWS
206. Hugging Face and AWS partner to make AI more accessible
207. Why we’re switching to Hugging Face Inference Endpoints, and maybe you should too
208. Zero-shot image-to-text generation with BLIP-2
209. 🤗 PEFT: Parameter-Efficient Fine-Tuning of Billion-Scale Models on Low-Resource Hardware
210. Speech Synthesis, Recognition, and More With SpeechT5
211. Introducing ⚔️ AI vs. AI ⚔️ a deep reinforcement learning multi-agents competition system
212. Generating Stories: AI for Game Development #5
213. Accelerating PyTorch Transformers with Intel Sapphire Rapids, part 2
214. A Dive into Pretraining Strategies for Vision-Language Models
215. The State of Computer Vision at Hugging Face 🤗
216. Using LoRA for Efficient Stable Diffusion Fine-Tuning
217. 2D Asset Generation: AI for Game Development #4
218. Optimum+ONNX Runtime - Easier, Faster training for your Hugging Face models
219. What Makes a Dialog Agent Useful?
220. 3D Asset Generation: AI for Game Development #3
221. Universal Image Segmentation with Mask2Former and OneFormer
222. Welcome PaddlePaddle to the Hugging Face Hub
223. Image Similarity with Hugging Face Datasets and Transformers
224. AI for Game Development: Creating a Farming Game in 5 Days. Part 2
225. Introduction to Graph Machine Learning
226. Accelerating PyTorch Transformers with Intel Sapphire Rapids, part 1
227. AI for Game Development: Creating a Farming Game in 5 Days. Part 1
228. Zero-shot image segmentation with CLIPSeg
229. Model Cards: Introducing HF Model documentation tools
230. A Complete Guide to Audio Datasets
231. Ethics and Society Newsletter #2: Let's talk about bias!
232. Faster Training and Inference: Habana Gaudi®2 vs Nvidia A100 80GB
233. From GPT2 to Stable Diffusion: Hugging Face arrives to the Elixir community
234. Illustrating Reinforcement Learning from Human Feedback (RLHF)
235. Deep Learning with Proteins
236. Probabilistic Time Series Forecasting with 🤗 Transformers
237. Using Stable Diffusion with Core ML on Apple Silicon
238. VQ Diffusion with 🧨 Diffusers
239. We are hiring interns!
240. Diffusion Models Live Event
241. Director of Machine Learning Insights [Part 4]
242. An Overview of Inference Solutions on Hugging Face
243. Accelerating Document AI
244. Hugging Face Machine Learning Demos on arXiv
245. Sentiment Classification with Fully Homomorphic Encryption using Concrete ML
246. Introducing our new pricing
247. Generating Human-level Text with Contrastive Search in Transformers 🤗
248. Training Stable Diffusion with Dreambooth using 🧨 Diffusers
249. Fine-Tune Whisper with 🤗 Transformers
250. Accelerate your models with 🤗 Optimum Intel and OpenVINO
251. Evaluating Language Model Bias with 🤗 Evaluate
252. From PyTorch DDP to 🤗 Accelerate to 🤗 Trainer, mastery of distributed training with ease
253. MTEB: Massive Text Embedding Benchmark
254. Getting started with Hugging Face Inference Endpoints
255. Stable Diffusion in JAX/Flax 🚀
256. Optimization story: Bloom inference
257. Introducing DOI: the Digital Object Identifier to Datasets and Models
258. Japanese Stable Diffusion
259. Very Large Language Models and How to Evaluate Them
260. Image Classification with AutoTrain
261. How 🤗 Accelerate runs very large models thanks to PyTorch
262. SetFit: Efficient Few-Shot Learning Without Prompts
263. Ethics and Society Newsletter #1
264. Incredibly Fast BLOOM Inference with DeepSpeed and Accelerate
265. What's new in Diffusers? 🎨
266. Train your first Decision Transformer
267. How to train a Language Model with Megatron-LM
268. OpenRAIL: Towards open and responsible AI licensing frameworks
269. Visualize proteins on Hugging Face Spaces
270. Pre-Train BERT with Hugging Face Transformers and Habana Gaudi
271. Stable Diffusion with 🧨 Diffusers
272. Deploying 🤗 ViT on Vertex AI
273. Deep Dive: Vision Transformers On Hugging Face Optimum Graphcore
274. A Gentle Introduction to 8-bit Matrix Multiplication for transformers at scale using transformers, accelerate and bitsandbytes
275. Hugging Face's TensorFlow Philosophy
276. Introducing Skops
277. Deploying 🤗 ViT on Kubernetes with TF Serving
278. Train and Fine-Tune Sentence Transformers Models
279. Proximal Policy Optimization (PPO)
280. Introducing the Private Hub: A New Way to Build With Machine Learning
281. Nyströmformer, Approximating self-attention in linear time and memory via the Nyström method
282. AI Policy @🤗: Comments on U.S. National AI Research Resource Interim Report
283. Introducing new audio and vision documentation in 🤗 Datasets
284. Faster Text Generation with TensorFlow and XLA
285. Deploying TensorFlow Vision Models in Hugging Face with TF Serving
286. Advantage Actor Critic (A2C)
287. How to train your model dynamically using adversarial data
288. The Technology Behind BLOOM Training
289. Building a Playlist Generator with Sentence Transformers
290. Introducing The World's Largest Open Multilingual Language Model: BLOOM
291. Getting Started with Sentiment Analysis on Twitter
292. Policy Gradient with PyTorch
293. Liftoff! How to get started with your first ML project 🚀
294. Announcing Evaluation on the Hub
295. Accelerate Large Model Training using DeepSpeed
296. Getting Started With Embeddings
297. Convert Transformers to ONNX with Hugging Face Optimum
298. Intel and Hugging Face Partner to Democratize Machine Learning Hardware Acceleration
299. Director of Machine Learning Insights [Part 3: Finance Edition]
300. Deep Q-Learning with Atari
301. The Annotated Diffusion Model
302. Graphcore and Hugging Face Launch New Lineup of IPU-Ready Transformers
303. Introducing Pull Requests and Discussions 🥳
304. Efficient Table Pre-training without Real Data: An Introduction to TAPEX
305. An Introduction to Q-Learning Part 2
306. Putting ethical principles at the core of research lifecycle
307. How Sempre Health is leveraging the Expert Acceleration Program to accelerate their ML roadmap
308. An Introduction to Q-Learning Part 1
309. Announcing the Hugging Face Fellowship Program
310. Machine Learning Experts - Sasha Luccioni Interview
311. Gradio 3.0 is Out!
312. Student Ambassador Program's call for applications is open!
313. Director of Machine Learning Insights [Part 2: SaaS Edition]
314. Accelerated Inference with Optimum and Transformers Pipelines
315. We Raised $100 Million for Open & Collaborative Machine Learning 🚀
316. Welcome fastai to the Hugging Face Hub
317. An Introduction to Deep Reinforcement Learning
318. Accelerate Large Model Training using PyTorch Fully Sharded Data Parallel
319. Opinion Classification with Kili and HuggingFace AutoTrain
320. Director of Machine Learning Insights [Series]
321. Getting Started with Transformers on Habana Gaudi
322. Introducing Hugging Face for Education
323. Supercharged Customer Service with Machine Learning
324. CO2 Emissions and the 🤗 Hub: Leading the Charge
325. Machine Learning Experts - Lewis Tunstall Interview
326. Habana Labs and Hugging Face Partner to Accelerate Transformer Model Training
327. Don't repeat yourself - 🤗 Transformers Design Philosophy
328. Introducing Decision Transformers on Hugging Face 🤗
329. Machine Learning Experts - Meg Mitchell Interview
330. Announcing the 🤗 AI Research Residency Program
331. Fine-Tune a Semantic Segmentation Model with a Custom Dataset
332. Image search with 🤗 datasets
333. Accelerate BERT inference with Hugging Face Transformers and AWS inferentia
334. Guiding Text Generation with Constrained Beam Search in 🤗 Transformers
335. BERT 101 🤗 State Of The Art NLP Model Explained
336. Fine-Tune ViT for Image Classification with 🤗 Transformers
337. Getting Started with Sentiment Analysis using Python
338. Making automatic speech recognition work on large files with Wav2Vec2 in 🤗 Transformers
339. Supercharged Searching on the Hugging Face Hub
340. Welcome Stable-baselines3 to the Hugging Face Hub 🤗
341. Case Study: Millisecond Latency using Hugging Face Infinity and modern CPUs
342. Boost Wav2Vec2 with n-gram LM in 🤗 Transformers
343. Deploy GPT-J 6B for inference using Hugging Face Transformers and Amazon SageMaker
344. Active Learning with AutoNLP and Prodigy
345. Gradio joins Hugging Face!
346. Perceiver IO: a scalable, fully-attentional model that works on any modality
347. Training CodeParrot 🦜 from Scratch
348. Introducing Snowball Fight ☃️, our First ML-Agents Environment
349. Getting Started with Hugging Face Transformers for IPUs with Optimum
350. Introducing the Data Measurements Tool: an Interactive Tool for Looking at Datasets
351. Accelerating PyTorch distributed fine-tuning with Intel technologies
352. Fine-tuning XLS-R for Multi-Lingual ASR with 🤗 Transformers
353. Scaling up BERT-like model Inference on modern CPU - Part 2
354. Large Language Models: A New Moore's Law?
355. Course Launch Community Event
356. Train a Sentence Embedding Model with 1B Training Pairs
357. The Age of Machine Learning As Code Has Arrived
358. Fine tuning CLIP with Remote Sensing (Satellite) images and captions
359. Showcase Your Projects in Spaces using Gradio
360. Hosting your Models and Datasets on Hugging Face Spaces using Streamlit
361. Summer at Hugging Face ☀️
362. Introducing Optimum: The Optimization Toolkit for Transformers at Scale
363. Hugging Face and Graphcore partner for IPU-optimized Transformers
364. Deep Learning over the Internet: Training Language Models Collaboratively
365. Welcome spaCy to the 🤗 Hub
366. Deploy Hugging Face models easily with Amazon SageMaker
367. Sentence Transformers in the ???? Hub
368. Few-shot learning in practice: GPT-NEO and the 🤗 Accelerated Inference API
369. Using & Mixing Hugging Face Models with Gradio 2.0
370. Scaling-up BERT Inference on CPU (Part 1)
371. Introducing 🤗 Accelerate
372. Distributed Training: Train BART/T5 for Summarization using 🤗 Transformers and Amazon SageMaker
373. Understanding BigBird's Block Sparse Attention
374. The Partnership: Amazon SageMaker and Hugging Face
375. My Journey to a serverless transformers pipeline on Google Cloud
376. Fine-Tune Wav2Vec2 for English ASR with 🤗 Transformers
377. Hugging Face Reads, Feb. 2021 - Long-range Transformers
378. Simple considerations for simple people building fancy neural networks
379. Retrieval Augmented Generation with Huggingface Transformers and Ray
380. Hugging Face on PyTorch / XLA TPUs
381. Faster TensorFlow models in Hugging Face Transformers
382. Fit More and Train Faster With ZeRO via DeepSpeed and FairScale
383. How we sped up transformer inference 100x for 🤗 API customers
384. Leveraging Pre-trained Language Model Checkpoints for Encoder-Decoder Models
385. Porting fairseq wmt19 translation system to transformers
386. Hyperparameter Search with Transformers and Ray Tune
387. Transformer-based Encoder-Decoder Models
388. Block Sparse Matrices for Smaller and Faster Language Models
389. The Reformer - Pushing the limits of language modeling
390. How to generate text: using different decoding methods for language generation with Transformers
391. How to train a new language model from scratch using Transformers and Tokenizers

Recent history: last 100 records

2024-05-15 PaliGemma – Google's Cutting-Edge Open Vision Language Model
2024-05-15 Hugging Face x LangChain : A new partner package in LangChain
2024-05-14 Introducing the Open Arabic LLM Leaderboard ubutler
2024-05-13 License to Call: Introducing Transformers Agents 2.0 aubanel
2024-05-10 Subscribe to Enterprise Hub with your AWS Account
2024-05-09 Building Cost-Efficient Enterprise RAG applications with Intel Gaudi 2 and Intel Xeon ramyaravi19
2024-05-05 Introducing the Open Leaderboard for Hebrew LLMs!
2024-05-03 Bringing the Artificial Analysis LLM Performance Leaderboard to Hugging Face
2024-05-01 Powerful ASR + diarization + speculative decoding with Hugging Face Inference Endpoints
2024-04-30 Improving Prompt Consistency with Structured Generations
2024-04-30 StarCoder2-Instruct: Fully Transparent and Permissive Self-Alignment for Code Generation tosh
2024-04-25 Fetch Consolidates AI Tools and Saves 30% Development Time with Hugging Face on AWS
2024-04-23 Introducing the Open Chain of Thought Leaderboard srirangr
2024-04-22 Jack of All Trades, Master of Some, a Multi-Purpose Transformer Agent
2024-04-19 Welcome Llama 3 - Meta's new open LLM
2024-04-18 The Open Medical-LLM Leaderboard: Benchmarking Large Language Models in Healthcare
2024-04-17 Introducing the LiveCodeBench Leaderboard - Holistic and Contamination-Free Evaluation of Code LLMs
2024-04-17 AI Apps in a Flash with Gradio's Reload Mode
2024-04-16 Running Privacy-Preserving Inference on Hugging Face Endpoints
2024-04-16 Ryght’s Journey to Empower Healthcare and Life Sciences with Expert Support from Hugging Face
2024-04-16 Introducing Idefics2: A Powerful 8B Vision-Language Model for the community theschwa
2024-04-12 Vision Language Models Explained
2024-04-10 Making thousands of open LLMs bloom in the Vertex AI Model Garden
2024-04-09 CodeGemma - an official Google release for code LLMs homarp
2024-04-09 Public Policy at Hugging Face
2024-04-05 Hugging Face partners with Wiz Research to Improve AI Security
2024-04-04 Text2SQL using Hugging Face Dataset Viewer API and Motherduck DuckDB-NSQL-7B
2024-04-03 Blazing Fast SetFit Inference with 🤗 Optimum Intel on Xeon
2024-04-02 Bringing serverless GPU inference to Hugging Face users gregorymichael
2024-03-25 Pollen-Vision: Unified interface for Zero-Shot vision models in robotics
