The Dispatch

GitHub Repo Analysis: karpathy/llm.c


Given the detailed analysis of open issues, pull requests, and specific source files within the karpathy/llm.c project, we can draw several conclusions about the state of the project, its development trajectory, and areas that require attention.

Development Focus and Trajectory

The project is in an active state of development, with a clear focus on performance optimization, particularly through direct CUDA implementations. This is evident from both the open issues and the pull requests, which predominantly revolve around improving the efficiency of core operations such as layer normalization, softmax, and attention in CUDA. The proposed Flash Attention 2 kernel (#60) and the layer-normalization optimizations (PR #80) underscore a commitment to leveraging CUDA for performance gains.

Compatibility and support across different platforms and configurations also emerge as a significant area of focus. Compilation errors on macOS (#74) and the request for Windows x86 MSVC support (#65) highlight both the demand for broader platform coverage and the gaps that remain. The proposed CMake support (PR #59) is a strategic move towards simplifying cross-platform builds and should noticeably improve the developer experience.

Team Contributions and Collaborations

Andrej Karpathy leads the project with substantial contributions across various aspects, from CUDA implementations to documentation updates. His role is pivotal not just in direct contributions but also in reviewing and merging pull requests from the community. This pattern of collaboration suggests a healthy open-source project dynamic where external contributions are welcomed and integrated into the main codebase.

Contributors like lancerts, scotthaleen, and VinciGit00 have focused on specific optimizations or fixes, indicating a community willing to tackle both performance enhancements and quality-of-life improvements. The diversity in contributions—from CUDA optimizations to documentation corrections—highlights a broad engagement with the project's goals.

Technical Risks and Anomalies

While the project demonstrates robust activity and engagement, several technical risks need addressing:

  • Cross-platform fragility: OpenMP initialization errors on macOS (#74), a loss mismatch on an AMD-GPU iMac (#77), and the absence of Windows x86 MSVC support (#65) all limit who can build and run the project.
  • CUDA toolchain sensitivity (#63) suggests a need for clearer version requirements and more defensive handling of older compilers.
  • The growing number of alternative kernels (multiple layer-normalization, softmax, and attention implementations) increases the maintenance burden and the risk of divergence between variants.
  • Aggressive performance optimizations merged without systematic benchmarks and tests risk regressions in correctness or speed.

Recommendations for Future Development

  • Broaden platform support, prioritizing the reported macOS and Windows issues and building on the proposed CMake setup (PR #59).
  • Establish benchmarking and automated-testing baselines so that CUDA optimizations can be validated before merging.
  • Continue improving documentation and onboarding material, extending tutorials such as doc/layernorm/layernorm.md to other components.

Conclusion

karpathy/llm.c stands out as a promising project with active development focused on high-performance LLM training using C/CUDA. Its trajectory indicates ongoing improvements in performance optimization and platform compatibility. Addressing identified technical risks and fostering community engagement will be key to sustaining its growth and relevance in the machine learning ecosystem.

Quantified Commit Activity Over 14 Days

Developer Branches PRs Commits Files Changes
Andrej 1 0/0/0 26 24 6834
Rickard Hallerbäck 1 2/2/0 2 2 32
lancer 1 8/4/2 5 3 24
Marco Vinciguerra 1 1/1/0 1 1 13
スコット 1 1/1/0 1 1 6
Krishnaraj Bhat 1 1/1/0 1 1 5
Mr L 1 1/1/0 1 1 4
Onuralp SEZER 1 3/1/2 1 1 3
Alexander Ziskind 1 1/1/0 1 1 3
Ikko Eltociear Ashimine 1 1/1/0 1 1 2
Varun L A 1 1/1/0 1 1 2
DominguesAddem1974 1 1/1/0 1 1 2
Luis Quintanilla (lqdev) 0 1/0/0 0 0 0
None (ngc92) 0 2/0/0 0 0 0
zarlo (zarlo) 0 1/0/1 0 0 0
Adhitya Mohan (poad42) 0 1/0/1 0 0 0
None (richzw) 0 1/0/1 0 0 0
None (100apps) 0 1/0/1 0 0 0
Victor Anderssén (Avicted) 0 1/0/0 0 0 0
None (abuneri) 0 1/0/0 0 0 0
Antonio Stano (ent0n29) 0 1/0/0 0 0 0
None (AKBANK28) 0 1/0/1 0 0 0
Franz Louis Cesista (leloykun) 0 1/0/0 0 0 0
Toph Beifong (modigeko) 0 1/0/1 0 0 0
Cuda Chen (Cuda-Chen) 0 1/0/1 0 0 0
assehe marie claire (dimaclara) 0 1/0/1 0 0 0
None (sirvan3tr) 0 1/0/0 0 0 0
Arturo de los Rios (Artuurodrt) 0 1/0/0 0 0 0
Nuño Sempere (NunoSempere) 0 1/0/1 0 0 0
Albert Lee (grepinsight) 0 1/0/0 0 0 0
John Rose (johnrose3000) 0 1/0/1 0 0 0
None (risingMantis) 0 1/0/1 0 0 0
Andre Slavescu (AndreSlavescu) 0 1/0/0 0 0 0
Ayush Anshul (ayushanshul07) 0 1/0/1 0 0 0
Chad Brewbaker (chadbrewbaker) 0 1/0/0 0 0 0
Abhirup Gupta (this-is-batman) 0 1/0/1 0 0 0
edwixx (anurag12-webster) 0 1/0/1 0 0 0

PRs: PRs created by that developer during the period, shown as opened/merged/closed-unmerged

~~~

Given the detailed analysis of open issues, pull requests, and a high-level overview of selected source files for the karpathy/llm.c project, several strategic insights and recommendations can be drawn for the CEO's consideration:

Strategic Insights

  1. Active Development & Community Engagement: The project is in an active state of development, with both core team members and external contributors playing significant roles. This vibrancy is a positive indicator of the project's health and its potential for sustained growth and innovation.

  2. Performance Optimization Focus: A considerable amount of effort is being directed towards performance optimization, especially around CUDA implementations. This focus is crucial for maintaining the competitive edge of llm.c in the field of large language models, where execution speed and efficiency are paramount.

  3. Cross-Platform Compatibility Challenges: Issues and pull requests reveal ongoing challenges with cross-platform compatibility, particularly concerning macOS and Windows support. Addressing these challenges is essential for broadening the user base and ensuring that developers on all platforms can contribute to and benefit from llm.c.

  4. Code Quality and Maintenance: The project demonstrates a commitment to code quality and maintenance, with numerous contributions aimed at fixing typos, improving documentation, and refining code structure. This attention to detail is vital for long-term sustainability.

Recommendations for Strategic Decision-Making

  1. Invest in Developer Experience: To attract more contributors and users, consider investing resources in improving the developer experience. This could include better documentation, more comprehensive setup guides, and tooling to simplify the development process.

  2. Expand Platform Support: Allocating resources to resolve compatibility issues on macOS and Windows can significantly expand the project's reach. This might involve dedicating a team to work on these specific challenges or collaborating with external experts in these areas.

  3. Prioritize Performance Benchmarks: Given the project's emphasis on performance optimization, establishing a robust benchmarking system could provide clear targets for improvements and demonstrate the project's capabilities to potential users and contributors.

  4. Enhance Testing and Quality Assurance: Expanding automated testing, especially around new CUDA features or optimizations, can help prevent regressions and ensure that optimizations deliver the expected performance gains without side effects.

  5. Strategic Partnerships for Growth: Exploring partnerships with academic institutions or industry players working on similar technologies could provide valuable insights, share workload on common challenges (like cross-platform support), and increase the project's visibility.

  6. Resource Allocation for Maintenance vs. Innovation: Balancing resources between maintaining existing features (e.g., fixing bugs, ensuring compatibility) and pursuing innovative optimizations or new features will be crucial. This balance will impact the project's ability to remain at the forefront of technology while also being stable and reliable for users.

  7. Community Building Initiatives: Engaging with the user community through forums, social media, or developer events can foster a stronger connection between users and developers, encourage more contributions, and provide direct feedback channels for improving llm.c.

By focusing on these strategic areas, llm.c can continue to grow as a leading solution for training large language models efficiently while fostering a vibrant community of developers and users around it.


Detailed Reports

Report On: Fetch issues



Analysis of Open Issues for the Software Project

Notable Problems and Uncertainties

Performance and Optimization

  • Issue #80: This issue discusses potential speed-ups with assumptions like a more recent CUDA compiler and channel-dimension multiples of 4. It's uncertain if these assumptions are acceptable for the project's goals, but they could lead to performance gains.
  • Issue #79: The online softmax CPU code and its GPU port are discussed, with performance comparisons to other kernels. There's uncertainty about the benefits of online softmax, which needs further optimization to match other kernels' performance.
  • Issue #76: A slightly faster GELU implementation is proposed, though it is unclear whether the marginal speed improvement justifies the change (a reference sketch of the GELU formula in question appears after this list).
  • Issue #60: Proposes a speedup by implementing a Flash Attention 2 kernel, which could significantly improve attention_forward_kernel2. However, it is not clear whether maintaining multiple versions of Flash Attention is beneficial.
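
For context, the following is a minimal C sketch of the tanh-approximation GELU used by GPT-2, i.e. the formula that the speed-up proposals in #76 target. It is an illustrative reference implementation, not the code proposed in the issue; the function name and the demo loop are assumptions made here for clarity.

~~~c
#include <math.h>
#include <stdio.h>

// Illustrative sketch only: GELU with the tanh approximation used by GPT-2,
// gelu(x) = 0.5 * x * (1 + tanh(sqrt(2/pi) * (x + 0.044715 * x^3)))
float gelu(float x) {
    const float k = 0.7978845608028654f;  // sqrt(2/pi)
    float cube = 0.044715f * x * x * x;
    return 0.5f * x * (1.0f + tanhf(k * (x + cube)));
}

int main(void) {
    for (float x = -2.0f; x <= 2.0f; x += 1.0f) {
        printf("gelu(% .1f) = % .6f\n", x, gelu(x));
    }
    return 0;
}
~~~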

Compatibility and Support

  • Issue #77: Reports a loss mismatch error on an iMac with AMD GPU. This indicates potential compatibility issues with non-NVIDIA GPUs.
  • Issue #74: An error related to OpenMP initialization hints at compatibility issues on macOS systems, possibly due to multiple OpenMP runtimes linked into the program.
  • Issue #70 and Issue #69: Both issues report errors during compilation on specific systems, indicating potential issues with cross-platform support or environment setup.
  • Issue #65: A request for Windows x86 MSVC support is notable as it would extend the project's reach to a broader audience of developers.
  • Issue #63: An error related to CUDA version compatibility suggests there may be a need for clearer documentation on version requirements or better handling of different CUDA versions.

Code Quality and Maintenance

  • Issue #67: Addresses a TODO in the code for calculating the max value neatly in softmax functions. While this seems like a minor code quality improvement, it also raises concerns about handling edge cases where certain values are zero.
  • Issue #62: Highlights the need for a check for CUDA availability before synchronizing in train_gpt2.py to prevent failures on systems without CUDA.

TODOs and Anomalies

  • Issue #80, Issue #79, Issue #76, and Issue #60: All discuss optimizations that are not yet finalized or require further work to be integrated effectively into the project.
  • Issue #77, Issue #74, Issue #70, and Issue #69: These issues require further investigation to resolve system-specific errors and improve cross-platform support.
  • Issue #65 and Issue #63: Indicate ongoing work to support different platforms and handle CUDA toolchain requirements.

General Context from Closed Issues

Recent closed issues that might provide context include:

  • Issue #78 and Issue #75: Closed quickly, possibly duplicates or created in error.
  • Issues related to environment setup (#73, #72, #71): These were addressed promptly, suggesting active maintenance in response to user feedback.

Summary

The open issues indicate active development focused on performance optimization and compatibility across different platforms. There are uncertainties regarding the adoption of certain optimizations and how they align with the project's goals. Additionally, there are several TODOs related to improving code quality and addressing system-specific errors. The recent trend in closed issues shows responsiveness to community contributions and environment setup concerns.

Report On: Fetch pull requests



Analysis of Pull Requests for karpathy/llm.c

Open Pull Requests

Notable PRs:

  • PR #80: Draft: Layer norm v2

    • Created very recently and contains several optimizations for layer normalization in CUDA.
    • Not intended to be merged as is, but rather a demonstration of possible speed-ups.
    • Changes the number of channels in the benchmark, which could affect performance comparisons.
  • PR #79: Include the online softmax CPU code and native port to GPU kernel

    • Introduces an algorithmic improvement by reducing the number of loops required in the softmax computation (a CPU sketch of the general idea appears after this list of open PRs).
    • The author acknowledges that further optimizations are needed to match the performance of other kernels.
  • PR #76: Slightly faster gelu on smaller blocksize contexts

    • Proposes a new gelu implementation with slight performance improvements.
    • Includes benchmark results showing a modest speedup.
  • PR #67: Fixed a TODO to calculate the max value neatly and use inv sum trick

    • Addresses a TODO comment in the code for a cleaner max value calculation.
    • Also suggests a multiplication instead of division for normalization, which may be faster on some hardware.
  • PR #62: Add check for CUDA availability before synchronizing in train_gpt2.py

    • Adds a necessary check for CUDA availability to prevent errors when CUDA is not enabled.
  • PR #60: Speedup attention_forward_kernel2 by implementing Flash Attention 2 kernel

    • Replaces an existing attention kernel with a more efficient Flash Attention 2 kernel.
    • The PR author suggests maintaining both Flash Attention 1 and 2 for educational purposes and future development.
  • PR #59: Add CMake project for cross-platform support and easier quick start setup

    • Adds CMake support to simplify building and setting up the project on different platforms, including Windows.
    • This PR could significantly improve the developer experience.
  • PR #55: Add Dev Container Support for CPU and GPU

    • Adds support for Dev Containers, which can greatly simplify environment setup for developers using Visual Studio Code or GitHub Codespaces.
  • PR #51: Fully fused layer-norm kernel

    • Another optimization proposal for layer normalization without requiring shared memory or intermediate buffers.
    • The PR includes performance benchmarks showing significant speedup.
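
To make the online-softmax idea behind PR #79 concrete, here is a minimal CPU sketch in C over a single row of logits. It illustrates the general technique rather than the PR's actual code, and it also demonstrates the reciprocal-multiply normalization mentioned in PR #67; the function and variable names are assumptions made here.

~~~c
#include <math.h>
#include <stdio.h>

// Illustrative sketch only: online softmax over one row of logits.
// The running maximum and the running sum of exponentials are maintained in a
// single pass, so the usual separate max pass and sum pass collapse into one
// loop; normalization then multiplies by the reciprocal of the sum instead of
// dividing by it.
void softmax_online(const float* logits, float* probs, int n) {
    float maxval = -INFINITY;
    float sum = 0.0f;
    for (int i = 0; i < n; i++) {
        float x = logits[i];
        if (x > maxval) {
            sum *= expf(maxval - x);  // rescale the partial sum to the new max
            maxval = x;
        }
        sum += expf(x - maxval);
    }
    float inv_sum = 1.0f / sum;
    for (int i = 0; i < n; i++) {
        probs[i] = expf(logits[i] - maxval) * inv_sum;
    }
}

int main(void) {
    float logits[5] = {1.0f, 2.0f, 3.0f, 4.0f, 5.0f};
    float probs[5];
    softmax_online(logits, probs, 5);
    for (int i = 0; i < 5; i++) printf("%.4f ", probs[i]);
    printf("\n");
    return 0;
}
~~~

Fusing the max and sum passes is what reduces the loop count relative to the standard three-pass softmax; mapping that loop onto GPU threads is presumably where the kernel-level work in the PR comes in.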

PRs Closed Without Merge:

  • PR #78: Correction du readme

    • Closed without merge, appears to be a user's practice PR rather than an actual project contribution.
  • PR #75: Include the online softmax CPU code and native port to GPU kernel (Draft)

    • Closed without merge, likely because it was superseded by PR #79.

Closed Pull Requests

Notable Merges:

  • PR #72: -O3 cannot go with -Ofast

    • Merged change to README clarifying compiler optimization flags.
  • PR #64: [train_gpt2.py] synchronize based on device

    • Merged fix for synchronization based on device availability in Python script.
  • PR #56: Detect OpenMP support - macOS Intel

    • Merged addition of the Homebrew path used to detect libomp on macOS Intel systems.
  • PR #48: Fix error in small typos in matmul_forward.cu

    • Merged correction of typos related to matrix multiplication forward pass.
  • PR #34: Free the memory in layernorm.c

    • Merged changes that free memory in layer normalization example code, addressing potential memory leaks.

Notable Non-Merges:

  • PR #71: Organize defined constants

    • Closed without merge; changes were made directly by repository owner instead.
  • PR #68: Improve numerical stability in loss calculation

    • Closed without merge; repository owner decided against adding a small constant to loss calculations due to potential introduction of bias.

Overall, there is active development and optimization work being done on CUDA kernels, particularly around layer normalization and softmax operations. Several PRs aim to improve cross-platform compatibility and developer experience. It's important that these changes are thoroughly reviewed and tested to ensure they do not introduce regressions or negatively impact performance.

Report On: Fetch Files For Assessment



Source Code Assessment

General Overview

The repository karpathy/llm.c is focused on providing a lightweight and efficient implementation of large language models (LLMs) like GPT-2 using C and CUDA. This approach aims to reduce the dependency on large frameworks like PyTorch or TensorFlow, making the codebase more accessible and easier to understand, modify, and optimize for specific hardware configurations.

Detailed Analysis of Selected Files

  1. dev/cuda/attention_forward.cu

    • Purpose: Implements the attention mechanism, a critical component of the transformer architecture used in models like GPT-2.
    • Structure: Likely contains CUDA kernels for efficiently computing the multi-head attention mechanism directly on the GPU.
    • Quality: Given its role, the file is expected to be highly optimized for GPU operations. The use of CUDA suggests an emphasis on performance, particularly in handling large matrices typical in attention mechanisms.
    • Assessment: This file is crucial for performance. Optimizations might include memory access patterns, usage of shared memory, and minimizing latency and bandwidth bottlenecks.
  2. train_gpt2.cu

    • Purpose: Handles the training loop for GPT-2 on CUDA, integrating various components like attention, feed-forward networks, and layer normalization.
    • Structure: This file likely orchestrates the data flow through different layers of the model, managing both forward and backward passes, and updates model parameters based on gradients.
    • Quality: The integration of entire model training in CUDA is complex and requires careful management of GPU resources, memory handling, and error checking.
    • Assessment: As a recent addition, this file's structure is critical for ensuring efficient training cycles on GPUs. It should be well-documented and maintainable to handle future upgrades or changes in model architecture.
  3. doc/layernorm/layernorm.md

    • Purpose: Provides a tutorial on implementing layer normalization in GPT-2, which is essential for stabilizing the mean and variance of each layer's inputs.
    • Structure: Includes both theoretical explanations and practical code snippets showing how to implement layer normalization in Python using PyTorch and then translating that into C.
    • Quality: The document is thorough, blending theory with practical application. It serves as an educational tool for developers unfamiliar with layer normalization or those new to implementing such concepts in lower-level languages like C.
    • Assessment: This tutorial is invaluable for onboarding new contributors to the project or for educational purposes. It should be kept up-to-date with best practices in both machine learning theory and software implementation (a minimal C sketch of the forward pass it covers appears after this list).
  4. train_gpt2.py

    • Purpose: Facilitates training GPT-2 with device-specific optimizations in Python, likely interfacing with C/CUDA implementations for performance-critical operations.
    • Structure: Manages data loading, model configuration, training loop execution including forward and backward passes, and synchronization across different computing devices.
    • Quality: The presence of device-specific optimizations suggests a focus on performance. The use of high-level Python code for orchestration allows leveraging existing libraries and tools for tasks like data handling and optimization loops.
    • Assessment: This file is crucial for ensuring that the Python wrapper efficiently interacts with lower-level CUDA code. It should handle device-specific peculiarities transparently to maximize code portability and performance.
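
As a companion to the layer-normalization tutorial described in item 3, the following is a minimal C sketch of a single-vector layer-norm forward pass. It is illustrative only, not code taken from doc/layernorm/layernorm.md or train_gpt2.cu; the function signature, epsilon value, and test inputs are assumptions made here.

~~~c
#include <math.h>
#include <stdio.h>

// Illustrative sketch only: layer normalization over one vector of C channels.
// Normalize to zero mean and unit variance, then apply a learned per-channel
// scale (weight) and shift (bias).
void layernorm_forward(float* out, const float* inp,
                       const float* weight, const float* bias,
                       int C, float eps) {
    float mean = 0.0f;
    for (int i = 0; i < C; i++) mean += inp[i];
    mean /= C;

    float var = 0.0f;
    for (int i = 0; i < C; i++) {
        float d = inp[i] - mean;
        var += d * d;
    }
    var /= C;

    float rstd = 1.0f / sqrtf(var + eps);  // reciprocal standard deviation
    for (int i = 0; i < C; i++) {
        out[i] = (inp[i] - mean) * rstd * weight[i] + bias[i];
    }
}

int main(void) {
    float inp[4]    = {1.0f, 2.0f, 3.0f, 4.0f};
    float weight[4] = {1.0f, 1.0f, 1.0f, 1.0f};
    float bias[4]   = {0.0f, 0.0f, 0.0f, 0.0f};
    float out[4];
    layernorm_forward(out, inp, weight, bias, 4, 1e-5f);
    for (int i = 0; i < 4; i++) printf("%.4f ", out[i]);
    printf("\n");
    return 0;
}
~~~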

Recommendations

  • Documentation: Enhance inline comments and external documentation to describe complex operations, especially in CUDA files where hardware-specific optimizations can be obscure.
  • Testing: Expand unit tests to cover more scenarios, especially edge cases in CUDA operations which can be prone to subtle bugs due to parallel execution.
  • Optimization: Continuously profile the CUDA implementations to identify bottlenecks. Consider newer CUDA features or different parallelization strategies as they become available.
  • Maintainability: Refactor complex sections into smaller functions or modules where possible to improve readability and maintainability.

Overall, the repository demonstrates a robust approach to implementing LLMs with an emphasis on efficiency and minimal dependencies. The selected files are key components that contribute significantly to the project's goals.

Report On: Fetch commits



Project Analysis: karpathy/llm.c

The project in question is llm.c, a software initiative aimed at training large language models (LLMs) such as GPT-2 in a simplified and efficient manner using pure C/CUDA. The project is spearheaded by Andrej Karpathy, a well-known figure in the machine learning community. The project's goal is to eliminate the need for heavy dependencies like PyTorch and Python, instead offering a lightweight alternative that compiles and runs instantly while matching the performance of established implementations. As of the latest information, the project has gained significant traction in the open-source community, with a high number of stars and forks on GitHub, indicating its popularity and potential for growth.

The project's trajectory includes ongoing work on direct CUDA implementation for performance gains, optimization of the CPU version with SIMD instructions, and plans to support more modern architectures. The repository also includes a quick start guide, a tutorial on implementing layer normalization in C, and various scripts for preprocessing datasets.

Team Members and Recent Activity

Andrej (karpathy)

  • Recent Commits: 26 commits with extensive modifications across multiple files.
  • Collaborations: Merged pull requests from various contributors.
  • Key Contributions:
    • Implemented full forward pass of GPT-2 in CUDA.
    • Optimized CUDA kernels for various operations like attention and softmax.
    • Updated documentation and README files.
    • Addressed compilation issues and provided fixes.

Soldy

  • Recent Commits: 1 commit adjusting README instructions regarding compilation flags.
  • Collaborations: Merged by Andrej (karpathy).
  • Key Contributions:
    • Clarified compilation flags in README to avoid conflicts between -O3 and -Ofast.

krrishnarraj

  • Recent Commits: 1 commit to synchronize based on device in train_gpt2.py.
  • Collaborations: Merged by Andrej (karpathy).
  • Key Contributions:
    • Improved synchronization logic in training script for better device compatibility.

scotthaleen

  • Recent Commits: 1 commit fixing OpenMP path for macOS Intel systems.
  • Collaborations: Merged by Andrej (karpathy).
  • Key Contributions:
    • Enhanced Makefile to detect OpenMP support on macOS Intel systems.

lancerts

  • Recent Commits: 5 commits with minor fixes in CUDA files.
  • Collaborations: Multiple PRs merged by Andrej (karpathy).
  • Key Contributions:
    • Fixed small typos and consistency issues in CUDA matrix multiplication code.
    • Addressed an undefined identifier issue in GELU kernel.

eltociear

  • Recent Commits: 1 commit correcting a typo in layernorm.md.
  • Collaborations: Merged by Andrej (karpathy).
  • Key Contributions:
    • Corrected documentation typo from "burried" to "buried".

VinciGit00

  • Recent Commits: 1 commit adding memory deallocation in layernorm.c.
  • Collaborations: Merged by Andrej (karpathy).
  • Key Contributions:
    • Ensured proper memory management by freeing allocated memory.

DominguesAddem1974

  • Recent Commits: 1 commit fixing a torch warning in train_gpt2.py.
  • Collaborations: Merged by Andrej (karpathy).
  • Key Contributions:
    • Resolved a torch-related warning message during Python demo execution.

varunlakkur

  • Recent Commits: 1 commit fixing a typo in README.
  • Collaborations: Merged by Andrej (karpathy).
  • Key Contributions:
    • Corrected a textual error within the project's README file.

Patterns and Conclusions

The development activity on the llm.c project shows a strong focus on performance optimization, particularly through CUDA implementations. The lead developer, Andrej Karpathy, is highly active, both contributing code directly and integrating changes from the community. There is a clear pattern of collaboration where external contributors address smaller issues or provide enhancements that are then reviewed and merged by Karpathy. This indicates an open and receptive approach to community contributions.

The recent activity also highlights attention to detail with numerous small fixes to documentation and code comments, suggesting an emphasis on code readability and maintainability. The frequent updates to README files imply that keeping users informed about changes and guiding them through potential issues is a priority for the team.

Overall, the project appears to be progressing well with active development focused on refining existing features, expanding capabilities, and ensuring user accessibility through clear documentation.

Quantified Commit Activity Over 14 Days

Developer Branches PRs Commits Files Changes
Andrej 1 0/0/0 26 24 6834
Rickard Hallerbäck 1 2/2/0 2 2 32
lancer 1 8/4/2 5 3 24
Marco Vinciguerra 1 1/1/0 1 1 13
スコット 1 1/1/0 1 1 6
Krishnaraj Bhat 1 1/1/0 1 1 5
Mr L 1 1/1/0 1 1 4
Onuralp SEZER 1 3/1/2 1 1 3
Alexander Ziskind 1 1/1/0 1 1 3
Ikko Eltociear Ashimine 1 1/1/0 1 1 2
Varun L A 1 1/1/0 1 1 2
DominguesAddem1974 1 1/1/0 1 1 2
Luis Quintanilla (lqdev) 0 1/0/0 0 0 0
None (ngc92) 0 2/0/0 0 0 0
zarlo (zarlo) 0 1/0/1 0 0 0
Adhitya Mohan (poad42) 0 1/0/1 0 0 0
None (richzw) 0 1/0/1 0 0 0
None (100apps) 0 1/0/1 0 0 0
Victor Anderssén (Avicted) 0 1/0/0 0 0 0
None (abuneri) 0 1/0/0 0 0 0
Antonio Stano (ent0n29) 0 1/0/0 0 0 0
None (AKBANK28) 0 1/0/1 0 0 0
Franz Louis Cesista (leloykun) 0 1/0/0 0 0 0
Toph Beifong (modigeko) 0 1/0/1 0 0 0
Cuda Chen (Cuda-Chen) 0 1/0/1 0 0 0
assehe marie claire (dimaclara) 0 1/0/1 0 0 0
None (sirvan3tr) 0 1/0/0 0 0 0
Arturo de los Rios (Artuurodrt) 0 1/0/0 0 0 0
Nuño Sempere (NunoSempere) 0 1/0/1 0 0 0
Albert Lee (grepinsight) 0 1/0/0 0 0 0
John Rose (johnrose3000) 0 1/0/1 0 0 0
None (risingMantis) 0 1/0/1 0 0 0
Andre Slavescu (AndreSlavescu) 0 1/0/0 0 0 0
Ayush Anshul (ayushanshul07) 0 1/0/1 0 0 0
Chad Brewbaker (chadbrewbaker) 0 1/0/0 0 0 0
Abhirup Gupta (this-is-batman) 0 1/0/1 0 0 0
edwixx (anurag12-webster) 0 1/0/1 0 0 0

PRs: PRs created by that developer during the period, shown as opened/merged/closed-unmerged