The Dispatch

OSS Report: karpathy/llm.c


CUDA Compatibility Issues Persist in karpathy/llm.c Project, Hindering Multi-GPU Performance

The karpathy/llm.c project, aimed at training large language models using C/CUDA, is experiencing ongoing challenges with CUDA errors and multi-GPU support, as evidenced by numerous user-reported issues. This project is spearheaded by Andrej Karpathy and focuses on providing a lightweight alternative to larger frameworks for LLM training.

Recent activity highlights significant community engagement, with 71 open issues reflecting a mix of feature requests, bug reports, and discussions. Notably, CUDA-related problems such as memory management errors are prevalent, indicating persistent difficulties in optimizing the codebase for diverse hardware configurations. There is also a strong emphasis on educational content, with users seeking better documentation and guidance.

Recent Activity

Recent issues and pull requests (PRs) indicate a focus on resolving CUDA compatibility and enhancing model features. Issues like #747 (FP16 training on Turing GPUs) and #727 (MPI run with 8 GPUs failing) underscore ongoing multi-GPU challenges. Meanwhile, PRs such as #757 (RMSNorm implementation) and #756 (RoPE positional encoding) suggest efforts to integrate advanced techniques for improved model performance.


Quantified Reports

Recent GitHub Issues Activity

Timespan | Opened | Closed | Comments | Labeled | Milestones
7 Days   |      0 |      0 |        0 |       0 |          0
30 Days  |      3 |      1 |        5 |       3 |          1
90 Days  |     17 |      5 |       25 |      17 |          1
All Time |    132 |     61 |        - |       - |          -

Like all software activity quantification, these numbers are imperfect but sometimes useful. Comments, Labels, and Milestones refer to those issues opened in the timespan in question.


Quantified Commit Activity Over 30 Days

Developer                             | Branches | PRs   | Commits | Files | Changes
Andrej Karpathy (karpathy)            |        2 | 1/0/1 |       9 |     7 |    2893
NEWPLAN (NEWPLAN)                     |        0 | 1/0/0 |       0 |     0 |       0
Biao Zhang (zhangpiu)                 |        0 | 0/1/0 |       0 |     0 |       0
Yusong Gao (GaoYusong)                |        0 | 0/1/0 |       0 |     0 |       0
Jake (Jake-Song)                      |        0 | 1/0/0 |       0 |     0 |       0
Jiahao Tan (KarhouTam)                |        0 | 0/0/1 |       0 |     0 |       0
Gabriel Castro (saladpalad)           |        0 | 1/0/0 |       0 |     0 |       0
Aleksa Gordić (gordicaleksa)          |        0 | 3/1/0 |       0 |     0 |       0
almao (invisiblepancake)              |        0 | 0/0/1 |       0 |     0 |       0
Gajanan Choudhary (gajanan-choudhary) |        0 | 1/0/0 |       0 |     0 |       0

PRs: opened/merged/closed-unmerged counts for PRs created by that developer during the period

Detailed Reports

Report On: Fetch issues



Recent Activity Analysis

The karpathy/llm.c repository currently has 71 open issues, indicating ongoing engagement from the community. Recent activity shows a mix of feature requests, bug reports, and discussions about implementation details. Notably, issues related to CUDA errors and multi-GPU support are prevalent, suggesting challenges in optimizing performance across different hardware configurations. There is also a significant focus on educational aspects, with users seeking clarification on various components of the codebase.

Several issues exhibit common themes, particularly around memory management and CUDA compatibility. For instance, multiple users have reported "out of memory" errors or illegal memory access during training, which could indicate underlying problems with resource allocation or kernel execution in the CUDA implementation. Additionally, there are discussions about improving documentation and providing clearer guidelines for new users.
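
These reports are often hard to localize because CUDA errors surface asynchronously, well after the call that caused them. Wrapping every runtime call in a file-and-line checking macro, in the spirit of llm.c's own cudaCheck convention, pins a failure to the first call that reported an error (a minimal sketch; the helper name cuda_check_impl is illustrative):

    #include <stdio.h>
    #include <stdlib.h>
    #include <cuda_runtime.h>

    // Print the file/line of the failing CUDA call, then abort. Wrapping
    // every runtime call this way turns a vague, late "illegal memory
    // access" into a pointer at the first call that actually failed.
    static void cuda_check_impl(cudaError_t error, const char *file, int line) {
        if (error != cudaSuccess) {
            fprintf(stderr, "[CUDA ERROR] %s:%d: %s\n",
                    file, line, cudaGetErrorString(error));
            exit(EXIT_FAILURE);
        }
    }
    #define cudaCheck(err) (cuda_check_impl((err), __FILE__, __LINE__))

    // Usage: cudaCheck(cudaMalloc(&ptr, nbytes));
    // After a kernel launch: cudaCheck(cudaGetLastError());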

Issue Details

Most Recently Created Issues

  1. Issue #752: llm.c for inference

    • Priority: Low
    • Status: Open
    • Created: 16 days ago
    • Updated: 11 days ago
  2. Issue #747: Can't train in FP16 on Turing

    • Priority: Medium
    • Status: Open
    • Created: 24 days ago
    • Updated: 23 days ago
  3. Issue #739: Suggestion: Test more Activation Functions

    • Priority: Low
    • Status: Open
    • Created: 36 days ago
    • Updated: 22 days ago
  4. Issue #729: MPI run error

    • Priority: High
    • Status: Open
    • Created: 40 days ago
    • Updated: No activity since creation
  5. Issue #727: MPI run with 8 GPU fails

    • Priority: High
    • Status: Open
    • Created: 46 days ago
    • Updated: No activity since creation

Most Recently Updated Issues

  1. Issue #752: llm.c for inference

    • Updated recently with a relevant comment linking to another project.
  2. Issue #747: Can't train in FP16 on Turing

    • User provided updates on their workaround but continues to face issues.
  3. Issue #739: Suggestion: Test more Activation Functions

    • Discussion continues with additional suggestions from community members.
  4. Issue #727: MPI run with 8 GPU fails

    • No updates since creation; issue remains unresolved.
  5. Issue #723: TypeError: normal_() got an unexpected keyword argument 'generator'

    • Ongoing discussion about resolving a Python-related error.

Themes and Commonalities

  • A significant number of issues revolve around CUDA-related errors, particularly concerning memory management and compatibility with different GPU architectures.
  • Users are actively engaging in discussions about potential improvements to the codebase, including feature requests for additional functionalities such as support for more activation functions and better handling of larger models.
  • There is a clear need for improved documentation and guidance for new users, as many issues stem from misunderstandings about how to properly utilize the library or troubleshoot common problems.

Overall, the activity within the karpathy/llm.c repository reflects a vibrant community eager to contribute to and learn from this innovative project focused on LLM training using C/CUDA.

Report On: Fetch pull requests



Report on Pull Requests

Overview

The repository karpathy/llm.c has a total of 103 open pull requests (PRs), with a significant focus on enhancing performance, adding new features, and improving the overall architecture of the project. The recent PRs predominantly revolve around the implementation of advanced techniques such as RMSNorm, RoPE positional encoding, and SwiGLU activation functions, along with various optimizations for existing kernels.
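
Of these techniques, SwiGLU departs furthest from GPT-2's GELU MLP: the hidden activation becomes the elementwise product of a SiLU-gated projection and a second "up" projection, which is also why PR authors report a larger memory footprint. A minimal CPU reference of the elementwise core (a sketch following the GLU-variants formulation; the gate/up names are illustrative, not the PRs' code):

    #include <math.h>

    // SiLU (swish): z * sigmoid(z)
    static float silu(float z) {
        return z / (1.0f + expf(-z));
    }

    // Elementwise SwiGLU core. Given the two linear projections of the
    // input (gate = x @ W_gate, up = x @ W_up), the hidden activation is
    // silu(gate) * up. The extra "up" projection buffer is the source of
    // the increased memory footprint noted in PR #755.
    void swiglu_forward(float *out, const float *gate, const float *up, int n) {
        for (int i = 0; i < n; i++) {
            out[i] = silu(gate[i]) * up[i];
        }
    }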

Summary of Pull Requests

  1. PR #757: RMSNorm - WIP
    Created by Aleksa Gordić, this work-in-progress PR aims to add RMSNorm support, with several commits covering kernel allocation and refactoring. The addition matters because RMSNorm is the normalization used by Llama-family models and may improve training stability (a reference sketch follows this list).

  2. PR #756: Add RoPE positional encoding - llama3 feature branch
    Also by Aleksa Gordić, this PR implements rotary position embedding (RoPE) from the RoFormer paper. The author provides experimental results showing improved validation loss with RoPE compared to traditional embeddings.

  3. PR #755: Add SwiGLU support - llama3 feature branch
    This PR introduces the SwiGLU activation function based on a recent paper. The author notes an increase in memory footprint but suggests potential performance benefits. However, they express uncertainty about its advantages over existing activation functions.

  4. PR #754: add llama 3 support to llm.c
    Initiated by Andrej Karpathy, this draft PR starts integrating Llama 3 support into the codebase, indicating ongoing efforts to enhance compatibility with newer models.

  5. PR #750: implement rmsnorm in C
    Created by Jake Song, this PR implements RMSNorm in C for Llama 3 (see the sketch after this list). The author expresses some uncertainty about its correctness but notes successful tests against PyTorch.

  6. PR #718: Add SwiGLU support
    This earlier SwiGLU PR by Aleksa Gordić appears to have been superseded by the refined version in #755.

  7. PR #753: Adamw thread coarsening kernel
    Gabriel Castro's PR focuses on optimizing the AdamW optimizer through thread coarsening techniques, which could enhance performance during training.

  8. PR #748: Fix sizing typo in train_gpt2_fp32.cu
    A simple fix by Gajanan Choudhary that corrects a mistyped size variable in train_gpt2_fp32.cu.

  9. PR #746: log with LINE and FILE for better addressing
    A minor improvement by NEWPLAN that makes log messages easier to trace by tagging them with source file and line number (the standard C __FILE__/__LINE__ idiom, as in the error-checking sketch in the issues report above).

  10. PR #743: Fixed modal script for updated cudnn version, and read errors
    Vyom Sharma's PR updates the Modal script for a newer cuDNN version and addresses the associated read errors.
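
As referenced in items 1 and 5, the technique itself is compact: RMSNorm rescales each vector by the root mean square of its elements, with no mean subtraction and no bias. A CPU reference in the style of llm.c's layer code (a sketch applied per position over the channel dimension; not the PRs' actual kernels):

    #include <math.h>

    // RMSNorm forward over C channels at one (batch, time) position:
    //   y = x / sqrt(mean(x^2) + eps) * weight
    // Unlike LayerNorm there is no mean subtraction and no bias, which
    // makes the op cheaper and easier to fuse.
    void rmsnorm_forward(float *out, const float *inp, const float *weight,
                         int C, float eps) {
        float ss = 0.0f;
        for (int i = 0; i < C; i++) {
            ss += inp[i] * inp[i];
        }
        float scale = 1.0f / sqrtf(ss / C + eps);
        for (int i = 0; i < C; i++) {
            out[i] = inp[i] * scale * weight[i];
        }
    }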

Analysis of Pull Requests

The current set of open pull requests reflects a strong emphasis on enhancing model performance and stability through advanced techniques and optimizations. Notably, several PRs are focused on integrating new normalization methods (like RMSNorm) and activation functions (such as SwiGLU), which are critical for improving training dynamics in large language models (LLMs).

Common Themes

  • Normalization and Positional Encoding: The introduction of RMSNorm (#757) and of RoPE positional encoding (#756) marks a shift toward the Llama-style architecture, which can improve convergence and training stability (a RoPE sketch follows this list).
  • Activation Functions: The addition of SwiGLU (#755, sketched under the Overview above) highlights ongoing experimentation with activation functions to optimize network performance.
  • Performance Optimizations: Multiple PRs aim at optimizing existing kernels (e.g., AdamW optimizations in #753), showcasing a commitment to improving computational efficiency.
  • Integration of New Models: The integration efforts for Llama 3 (#754) signify an adaptive approach to evolving architectures within the community.
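
As referenced in the first bullet, RoPE encodes position by rotating each even/odd channel pair of the query and key vectors through a position-dependent angle, instead of adding a learned positional embedding. A reference sketch (RoFormer convention with base 10000; the PR's actual kernel may lay out channel pairs differently):

    #include <math.h>

    // Apply rotary position embedding in place to one head's query or key
    // vector of even dimension d at sequence position pos. Channel pair
    // (2i, 2i+1) is rotated by the angle pos * base^(-2i/d).
    void rope_forward(float *q, int d, int pos, float base) {
        for (int i = 0; i < d / 2; i++) {
            float theta = pos * powf(base, -2.0f * i / d);
            float c = cosf(theta), s = sinf(theta);
            float x0 = q[2 * i], x1 = q[2 * i + 1];
            q[2 * i]     = x0 * c - x1 * s;
            q[2 * i + 1] = x0 * s + x1 * c;
        }
    }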

Anomalies

  • Few Recent Merges: Despite the many active PRs, few have been merged recently relative to the 103 open, which could indicate a review bottleneck or unresolved disagreements on implementation details.
  • Old PRs Still Open: Some older PRs remain open without significant updates or merges, suggesting possible stagnation or prioritization issues within the development workflow.

Disputes and Discussions

There are instances where contributors have engaged in discussions about code quality and design decisions (e.g., comments from Andrej Karpathy regarding the use of AutoTokenizer). This reflects an active community dynamic but also points to potential friction regarding codebase direction and standards.

Conclusion

The pull requests currently open in karpathy/llm.c demonstrate a vibrant development environment focused on advancing LLM capabilities through innovative techniques and optimizations. However, attention should be given to streamlining the merging process and addressing any outstanding discussions or disputes to maintain momentum in project development.

Report On: Fetch commits



Repo Commits Analysis

Development Team and Recent Activity

Team Members and Activities

Andrej Karpathy (karpathy)

  • Recent Commits: 9 commits in the last 30 days.
  • Recent Activity:
    • 1 day ago: Introduced an unfused rmsnorm forward pass in llmc/layernorm.cuh and modified related files.
    • 1 day ago: Added a new Encoder that does not use positional embeddings, ensuring activations match after encoding.
    • 1 day ago: Implemented dataloader support for serving uint32_t tokens, required because Llama 3's vocabulary exceeds the uint16_t range used for GPT-2 (see the sketch after this list).
    • 1 day ago: Marked the llama3cu target as phony in the Makefile.
    • 1 day ago: Adapted parameter tensor sizes to load Llama 3 weights from file.
    • 3 days ago: Changed export code of Llama 3 to be compatible with GPT-2.
    • 4 days ago: Began aligning Python and C versions with default hyperparameters and header reading.
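
The uint32_t dataloader change is the load-bearing one here: Llama 3's 128,256-entry vocabulary overflows the uint16_t token width that was sufficient for GPT-2's 50,257 tokens. A sketch of what the widening amounts to (illustrative only; llm.c's real DataLoader also handles shard headers and batching on top of this):

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    // Read n tokens from a binary shard into int32 model inputs.
    // GPT-2-style shards store 2-byte tokens; Llama 3-style shards need
    // 4-byte tokens because the vocab no longer fits in 16 bits.
    void read_tokens(FILE *f, int *out, size_t n, int token_bytes) {
        for (size_t i = 0; i < n; i++) {
            if (token_bytes == 2) {
                uint16_t t;
                if (fread(&t, sizeof(t), 1, f) != 1) exit(EXIT_FAILURE);
                out[i] = (int)t;
            } else {
                uint32_t t;
                if (fread(&t, sizeof(t), 1, f) != 1) exit(EXIT_FAILURE);
                out[i] = (int)t;
            }
        }
    }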

Aleksa Gordić (gordicaleksa)

  • Recent Commits: None in the last 30 days; however, the table above shows three PRs opened and one merged in that window, indicating continued contribution through the PR workflow.

Other Team Members

  • The following members have no recent commits:
    • Biao Zhang (zhangpiu)
    • Yusong Gao (GaoYusong)
    • Gabriel Castro (saladpalad)
    • Jake (Jake-Song)
    • Gajanan Choudhary (gajanan-choudhary)
    • NEWPLAN (NEWPLAN)
    • almao (invisiblepancake)
    • Jiahao Tan (KarhouTam)

Patterns and Themes

  • Dominance of Andrej Karpathy: The majority of recent activity is concentrated on Andrej's contributions, indicating he is the primary driver of development at this stage.
  • Focus on Llama 3 Integration: Recent commits are heavily focused on integrating Llama 3 features, suggesting a significant push towards enhancing model capabilities.
  • Limited Commit Activity from Others: Most other contributors have no commits in the window; their recent work sits in open PRs instead, which may reflect the review backlog noted in the pull request report.

Conclusions

The development team is in a phase of concentrated activity led by Andrej Karpathy, focused on advancing the Llama 3 implementation. Other contributors show little commit activity, with much of their work pending in open pull requests; if the review pipeline stays backed up, collaborative progress may slow.