Kyle Lo


research

I’m a research scientist at the Allen Institute for AI on the OLMo and Semantic Scholar projects. I specialize in natural language processing, machine learning, and human-AI interaction.

me

I live in Seattle. When not working, I hang with my cat Belphegor and play board games (Robinson Crusoe, Cthulhu: Death May Die, Hanabi) and video games (Baldur's Gate 3, Valheim, Slay the Spire, Noita, Vampire Survivors). I love D&D and just finished a four-year campaign in Eberron; now I'm embarking on a West Marches campaign while trying out other systems like Blades in the Dark. I'm a boba enthusiast, and my favorites in Seattle are Xing Fu Tang, TP Tea, and Sunright Tea Studio.

news

Oct 01, 2024 Excited that our Semantic Reader paper is published in Communications of the ACM! πŸ₯³ This paper synthesizes our five years of AI and HCI research (50 researchers, 12 institutions) aimed at understanding reading challenges faced by scholars and how AI-powered intelligent interfaces can help. Check out the paper here!
Sep 25, 2024 Molmo is out! Molmo is our family of open, late-fusion image πŸ‘€ + text πŸ’¬ language models trained using a really high-quality dataset of images + dense captions / task demonstrations! βœ… Read the paper here, βœ… play with the model here, βœ… download the weights here, and βœ… look forward to our dataset release soon!
Sep 03, 2024 OLMoE is out! Our first mixture of experts model in the OLMo family πŸŽ‰ OLMoE has only 1B active params but matches perf of larger dense models 🫨 and comes released with: βœ… weights βœ… data βœ… code βœ… ckpts βœ… logs βœ… detailed paper! Download the weights here and read the paper here!
Aug 14, 2024 So proud to see both our OLMo and Dolma papers win πŸ† Best Paper awards πŸ† at ACL 2024 πŸ‡ΉπŸ‡­
Jul 25, 2024 Excited to be speaking at Gen Law workshop at ICML 2024 in πŸ‡¦πŸ‡Ή! I’ll be sharing fun pretraining data curation stories from OLMo, and my slides have cats! 🐈
Jun 01, 2024 Welcome Summer 2024 interns! Excited to be working with Alex Wettig, Chaitanya Malaviya, Lucy Li, Rose Wang, and Vishakh Padmakumar!
May 16, 2024 Four papers accepted to ACL 2024! πŸŽ‰ Two papers on open language models: OLMo for models and Dolma for data. Two papers on evaluating long-text generation: InfolossQA for omissions in medical summaries and KIWI for long-form QA over science papers. See y’all in Thailand! πŸ‡ΉπŸ‡­
May 01, 2024 omg attending back-to-back conferences. ICLR 2024 in Vienna πŸ‡¦πŸ‡ΉπŸ₯ presenting Booookscore, evaluating discourse coherence in book-length summarization. CHI 2024 in Hawaii πŸ‡ΊπŸ‡ΈπŸ£ presenting two works on helping non-expert audiences understand research papers through AI: Paper Plain, an augmented reading interface over medical papers, and Know Your Audience, a large-scale user study on the benefits and pitfalls of plain language summarization.
Feb 01, 2024 Excited to release our first set of artifacts from the OLMo project πŸ₯³ Want models? Download our open-source weights on Huggingface: one model at 1B scale and a pair at 7B scale, trained on different hardware. We also open-source all our training and inference code. Learn more from our paper. Want data? Download all 3T tokens on Huggingface. We also open-source all our dataset construction tools. Learn more from our paper.
Dec 12, 2023 Happy to be rounding out the year with a Best Paper Award πŸ† in the EMNLP 2023 System Demo track for PaperMage! Also presenting papers accepted to the EMNLP 2023 Main Conference and Findings on Decontextualizing Scientific Document Snippets, Tip-of-the-Tongue Retrieval, and Evaluating Multidocument Summarization with Retrieved Documents. Excited to see all my co-authors in Singapore!
Jun 15, 2023 Welcome Summer 2023 interns! Excited to be working directly with Orion Weller, Hyunji Lee, Fangyuan Xu and Hang Jiang!
Apr 30, 2023 Having a pretty good April :) Best Paper Award at CHI 2023 (CiteSee) and Outstanding Paper Award at EACL 2023 (LongEval). Thanks and congrats to all my co-authors!
Jan 01, 2023 New year, new site!