The Dispatch

GitHub Repo Analysis: OpenBMB/MiniCPM



# MiniCPM: Unveiling the Infinite Potential of End-side Large Language Models

## Overview

MiniCPM is an open-source project that represents a significant contribution to the field of natural language processing. The collaboration between Mianbi Intelligence and Tsinghua University has resulted in a model that is not only efficient but also demonstrates superior performance in certain benchmarks. The strategic implications of deploying such a model on mobile devices are vast, potentially opening up new market opportunities and applications where computational resources are limited.

## Apparent Issues and TODOs

The project faces several challenges that are common among large language models:

- **Hallucination Issues**: The model's tendency to produce hallucinatory responses needs to be addressed to ensure reliability.
- **Lack of Identity Training**: Without identity training, there's a risk of outputting sensitive identity information.
- **Prompt Sensitivity**: The model's high sensitivity to prompts could lead to inconsistent performance.
- **Inaccurate Knowledge Memory**: Enhancements are planned to improve the model's knowledge recall capabilities.

## Recent Development Activities

The development team has been focused on improving documentation and deployment scripts, which is crucial for adoption and ease of use. The recent activities suggest a strategic push towards making the model more accessible and user-friendly.

- **huangyuxiang03**: Contributed to CPU inference optimization and documentation improvements.
- **ShengdingHu (DingDing)**: Worked on README updates in multiple languages, enhancing global accessibility.
- **SUDA-HLT-ywfang (Y.W. Fang)**: Addressed demo issues and improved user-facing content.
- **soulteary (Su Yang)**: Enhanced configuration options for the demo, improving flexibility.
- **zkh2016 (zhangkaihuo)**, **tuantuanzhang (zhang xinrong)**, **Achazwl (William)**, **SwordFaith (Xiang Long)**, **THUCSTHanxu13 (SillyXu)**, **jctime (Chao Jia)**: All contributed to documentation, which is key for user comprehension and engagement.

The pattern of recent commits indicates a concerted effort to refine the model's usability and documentation. Collaboration among team members on these fronts is evident, suggesting a well-coordinated team.

## Conclusions

The development team's recent focus on documentation and deployment suggests a strategy aimed at broadening the user base and ensuring the model's capabilities are well understood. The responsiveness to issues and open-source ethos positions MiniCPM favorably in the academic and limited commercial spheres.

[Link to the MiniCPM repository](https://github.com/OpenBMB/MiniCPM)

---

### Notable Problems and Uncertainties:

Deployment and compatibility issues, feature requests, and model performance concerns are the main categories of open issues. Addressing these will be critical for user satisfaction and the project's success.

### TODOs and Anomalies:

Improvements in documentation and addressing compatibility issues are among the top TODOs. These are strategic areas that can significantly impact the adoption rate and user experience.

### Recently Closed Issues:

The responsiveness to issues such as model loading errors and user experience improvements is a positive sign. It indicates an active and engaged development team, which is crucial for maintaining momentum and trust within the user community.

---

### Analysis of Open Pull Requests:

The open PRs indicate ongoing efforts to improve the project's documentation and features. The markdown-table formatting problem in PR [#28](https://github.com/OpenBMB/MiniCPM/issues/28) is a minor but important detail to fix in order to keep the documentation professional.

### Analysis of Recently Closed Pull Requests:

The closed PRs show a healthy project lifecycle with new features being added and documentation being improved. The addition of issue templates in PR [#1](https://github.com/OpenBMB/MiniCPM/issues/1) is a strategic move to streamline contributions and issue tracking.

### General Recommendations:

The project should continue to prioritize user experience and documentation. Open PRs need to be reviewed and tested with an emphasis on maintaining high-quality contributions. Monitoring the impact of recent changes is also important to ensure they meet user needs and do not introduce new issues.


Detailed Reports

Report On: Fetch issues



Analysis of the project's open issues reveals several trends, notable problems, and uncertainties that may affect its progress and user experience.

Notable Problems and Uncertainties:

  1. Deployment and Compatibility Issues:

    • Issues #34, #33, and #31 all relate to deployment requests for the model, indicating high demand for deployment support across different platforms and systems.
    • Issue #30 raises a concern about system memory requirements, which is critical for users with limited resources.
    • Issues #27 and #22 suggest the need for easier setup and quick-start options like Docker support and Google Colab links, which can lower the barrier to entry for new users (see the quick-start sketch after this list).
    • Issues #12 and #6 highlight compatibility issues with macOS and Apple Silicon, respectively, which could limit the user base if not addressed.
  2. Feature Requests and Enhancements:

    • Issues #7 and #4 indicate a demand for more openness, such as the release of the base model and a detailed requirements.txt file, which would help users understand and work with the model more effectively.
    • Issues #13 and #15 mention annoying log alerts, which, while not critical, can degrade the user experience.
  3. Model Performance and Behavior:

    • Issue #17 discusses a "Bad Case" where the model repeats output, which could be a significant issue affecting the quality of the model's output.
    • Issue #8 mentions the absence of a requirements.txt file, which could lead to user problems when trying to run the model, indicating a need for better documentation and setup instructions.
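Several of these issues (#22, #27, #8) reduce to the absence of a copy-paste quick start. As a point of reference, below is a minimal sketch of the kind of snippet users are asking for. It is an illustration only: the model id `openbmb/MiniCPM-2B-dpo-bf16` and the `chat()` helper (provided by the repository's `trust_remote_code` modeling code) are assumptions, and the project's official instructions may differ.

```python
# Minimal quick-start sketch. Assumptions: the Hugging Face model id and the
# chat() helper shipped with MiniCPM's trust_remote_code implementation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

path = "openbmb/MiniCPM-2B-dpo-bf16"  # assumed model id
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,  # bf16 checkpoint; use float32 on CPU-only machines
    device_map="cuda",           # or "cpu" if no GPU is available
    trust_remote_code=True,      # loads MiniCPM's custom modeling code
)

# chat() is a convenience method exposed by the remote code.
response, history = model.chat(tokenizer, "Introduce MiniCPM briefly.", temperature=0.8, top_p=0.8)
print(response)
```

A README snippet of this shape, together with a pinned requirements.txt, would directly address #8, #22, and #27.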

TODOs and Anomalies:

  • Documentation Improvements:

    • Issues #4 and #8 both point to the need for a detailed requirements.txt. This is a simple yet crucial addition that would greatly assist users in setting up their environments.
    • Issue #12 suggests that the documentation should clearly state that MiniCPM does not support macOS, which is important for setting user expectations.
  • Compatibility and Support:

    • The project needs to address compatibility issues with macOS (Issue #12) and Apple Silicon (Issue #6). This could involve code updates or clear documentation on supported platforms.
    • Issue #27 requests Docker support, which is a common way to simplify deployment and ensure consistent environments across different systems.
  • Feature Enhancements:

    • There is a clear demand for deployment support on various platforms, as seen in Issues #34, #33, and #31.
    • Issues #22 and #27 indicate that users are looking for quick-start options like Google Colab and Docker, which can help them test and use the model without going through a complex setup process.

Recently Closed Issues:

  • The recently closed issues, such as #32, #29, #26, #24, #23, #20, #18, #11, #10, and #9, suggest that the project maintainers are responsive and actively working on resolving issues related to model details, fine-tuning, and user experience improvements.
  • The closure of Issue #9, which was about model loading errors after conversion, indicates that compatibility with different versions of dependencies like HF transformers is an ongoing concern that requires attention.

Conclusion:

The project seems to be actively developed with a focus on expanding compatibility and ease of use. There is a need for better documentation, quick start options, and addressing compatibility issues to ensure a broader user base can effectively use the model. The project team should prioritize resolving deployment-related issues and enhancing documentation to facilitate a smoother user experience.

Report On: Fetch pull requests



Analysis of Open Pull Requests:

PR #28: Update README.md

  • Summary: This PR aims to update the README.md file by adding test information for the Redmi k50.
  • Notable Issues: There is a comment from SwordFaith indicating that the markdown table is not being handled correctly, which suggests that the PR may have formatting issues that need to be addressed before merging.
  • Action Items: Review the markdown formatting and ensure that the table is displayed correctly. It may require additional commits to fix the issue.

PR #25: feat: colab online demo #22

  • Summary: This PR is linked to issue #22 and adds information about an online demo using Google Colab to both the English and standard README files.
  • Notable Issues: No immediate issues are evident from the PR description. The PR seems straightforward, adding documentation for a new feature.
  • Action Items: Verify that the added instructions are clear and accurate, and that the demo works as intended. If everything checks out, this PR can likely be merged.

PR #16: fix: set pad token id #15

  • Summary: Linked to issue #15, this PR addresses a warning by setting the pad token ID in a demo file.
  • Notable Issues: There are no comments indicating any problems with this PR.
  • Action Items: Review the code change to ensure it resolves the warning without introducing any new issues. If it looks good, this PR can be approved and merged.
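For context, the usual shape of this fix in transformers-based demos is to reuse the EOS token as the pad token; the sketch below shows that common pattern and may differ from the actual diff in PR #16.

```python
# Common pad-token fix for transformers-based demos; PR #16's actual
# change may differ. Many causal LMs ship without a dedicated pad token,
# which triggers a "Setting `pad_token_id` to `eos_token_id`" warning.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "openbmb/MiniCPM-2B-dpo-bf16", trust_remote_code=True
)
if tokenizer.pad_token_id is None:
    tokenizer.pad_token = tokenizer.eos_token  # silences the warning during generation
```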

PR #14: fix: ignore typedstorage deprecated message #13

  • Summary: This PR is associated with issue #13 and aims to suppress a deprecation warning related to PyTorch's TypedStorage.
  • Notable Issues: No comments or issues have been highlighted for this PR.
  • Action Items: Check that the warning suppression is appropriate and does not hide any underlying issues that should be addressed. If the change is satisfactory, the PR can be merged.
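A typical way to suppress exactly this PyTorch warning uses the standard warnings module; this is a sketch of the general pattern, not necessarily the code in PR #14.

```python
# Sketch: silence only the TypedStorage deprecation warning, not all warnings.
# PR #14's actual change may differ from this generic pattern.
import warnings

warnings.filterwarnings(
    "ignore",
    message="TypedStorage is deprecated",  # matched against the start of the warning text
    category=UserWarning,
)
```

Scoping the filter to the message (rather than ignoring all UserWarnings) is the safer choice, since it avoids hiding unrelated issues.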

Analysis of Recently Closed Pull Requests:

PR #21: feat: allow user set torch dtype #20

  • Summary: This PR, linked to issue #20, allows users to set the torch data type in demo files.
  • Notable Issues: The PR was merged with a comment from SUDA-HLT-ywfang indicating approval ("LGTM. merged.").
  • Action Items: Since this PR is already merged, no action is required unless new issues arise from this change.
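A plausible shape for this option, assuming an argparse-based demo script, is sketched below; the flag name and dtype mapping are illustrative, not necessarily what PR #21 uses.

```python
# Illustrative sketch: expose the torch dtype as a CLI flag in a demo script.
# The flag name and mapping are hypothetical; PR #21's code may differ.
import argparse
import torch
from transformers import AutoModelForCausalLM

DTYPES = {"float32": torch.float32, "float16": torch.float16, "bfloat16": torch.bfloat16}

parser = argparse.ArgumentParser()
parser.add_argument("--torch-dtype", choices=list(DTYPES), default="bfloat16")
args = parser.parse_args()

model = AutoModelForCausalLM.from_pretrained(
    "openbmb/MiniCPM-2B-dpo-bf16",         # assumed model id
    torch_dtype=DTYPES[args.torch_dtype],  # user-selected precision
    trust_remote_code=True,
)
```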

PR #19: feat: allow user change the demo host and port #18

  • Summary: Linked to issue #18, this PR enables users to change the host and port for the demo.
  • Notable Issues: The PR was merged with a comment from SUDA-HLT-ywfang about adjusting the README later.
  • Action Items: Ensure that the README is updated to reflect the new functionality. Monitor for any issues that users may report related to this change.
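Assuming the demo is Gradio-based, a change like this presumably threads user-supplied values into `launch()`; the sketch below uses hypothetical flag names and a stand-in interface.

```python
# Sketch: make a Gradio demo's bind address and port configurable.
# Assumes a Gradio-based demo; flag names are hypothetical and may
# differ from PR #19's actual change.
import argparse
import gradio as gr

parser = argparse.ArgumentParser()
parser.add_argument("--server-name", default="127.0.0.1", help="host interface to bind")
parser.add_argument("--server-port", type=int, default=7860, help="port to listen on")
args = parser.parse_args()

# Stand-in interface; the real demo would wrap the model's chat function.
demo = gr.Interface(fn=lambda text: text[::-1], inputs="text", outputs="text")
demo.launch(server_name=args.server_name, server_port=args.server_port)
```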

PR #2: docs(readme): add mini-cpm-v readme

  • Summary: This PR adds a significant amount of content to the README files and includes an image asset.
  • Notable Issues: The PR was merged without any comments indicating problems.
  • Action Items: No action required unless there are issues with the new content that need to be corrected post-merge.

PR #1: Add issue templates

  • Summary: This PR adds multiple issue templates to the repository to streamline the process of reporting bugs, requesting features, and reporting bad cases.
  • Notable Issues: The PR was merged and included a reference to another contributor, suggesting collaboration.
  • Action Items: No immediate action required. However, it's important to monitor the usage of these templates to ensure they are clear and effective for contributors.

General Recommendations:

  • For Open PRs: Review and test all open PRs thoroughly. Pay special attention to PR #28's markdown formatting issue. Ensure that all PRs are linked to their corresponding issues for traceability.
  • For Closed PRs: No significant issues have been noted for recently closed PRs. However, it's important to monitor the effects of these changes and be ready to address any new issues that may arise as a result of these merges.
  • Overall: It's crucial to maintain clear communication with contributors, especially when there are open comments or potential issues with a PR. Additionally, ensure that documentation is kept up to date with any new features or changes to the project.

Report On: Fetch commits



MiniCPM: Unveiling the Infinite Potential of End-side Large Language Models

Overview

MiniCPM is an open-source series of end-side large models jointly released by Mianbi Intelligence and the Natural Language Processing Laboratory of Tsinghua University. The main language model, MiniCPM-2B, has only 2.4 billion non-embedding parameters, yet it performs comparably to, or better than, several larger models on various benchmarks, especially in Chinese, mathematics, and coding. MiniCPM can be deployed on mobile devices with efficient inference, and the cost of secondary development is low.

Apparent Issues and TODOs

  • Hallucination Issues: Due to its limited scale, the model may produce hallucinatory responses, especially with the DPO model, which generates longer replies.
  • Lack of Identity Training: The model has not undergone any identity training, which may lead it to output identity information resembling that of GPT-series models.
  • Prompt Sensitivity: The model's outputs are highly influenced by prompts, which may produce inconsistent results across multiple attempts.
  • Inaccurate Knowledge Memory: The model's capacity for accurate knowledge recall is limited; future enhancements are planned using the RAG (retrieval-augmented generation) method, illustrated in the sketch after this list.
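To make the RAG plan concrete, the sketch below shows the general shape of retrieval-augmented prompting: retrieve relevant passages, then prepend them to the prompt so the model answers from evidence. It is a generic illustration with a toy keyword retriever, not MiniCPM's planned implementation.

```python
# Generic RAG sketch: toy keyword retrieval plus prompt assembly.
# Illustrates the technique only; not MiniCPM's planned implementation.

DOCS = [
    "MiniCPM-2B has 2.4 billion non-embedding parameters.",
    "MiniCPM can run on mobile devices with efficient inference.",
    "MiniCPM was released by Mianbi Intelligence and Tsinghua's NLP Laboratory.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(query: str) -> str:
    """Prepend retrieved context so the model answers from evidence."""
    context = "\n".join(retrieve(query, DOCS))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("How many parameters does MiniCPM-2B have?"))
```

Production systems replace the keyword overlap with embedding-based similarity search, but the prompt-assembly step is the same.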

Recent Development Activities

Recent development activity includes updates to the README files and the addition of demo deployment and fine-tuning scripts. The team members and their contributions are as follows:

  • huangyuxiang03: Fixed issues with flash attention and CPU inference, updated README files, and added support for vLLM (see the inference sketch after this list).
  • ShengdingHu (DingDing): Updated README files in both English and Chinese, and provided links to technical blogs and model downloads.
  • SUDA-HLT-ywfang (Y.W. Fang): Fixed typos in the demo, updated README files, and replaced text cases with images.
  • soulteary (Su Yang): Added features to set torch dtype and change the demo host and port.
  • zkh2016 (zhangkaihuo): Updated README files.
  • tuantuanzhang (zhang xinrong): Added a replicate URL to the README.
  • Achazwl (William): Updated English README and community links.
  • SwordFaith (Xiang Long): Added fine-tuning scripts and fixed related issues.
  • THUCSTHanxu13 (SillyXu): Updated README files.
  • jctime (Chao Jia): Updated README files.
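As an illustration of what the vLLM support enables, here is a minimal sketch of offline inference through vLLM's Python API; the model id and generation settings are assumptions, and MiniCPM may require a specific vLLM version or fork.

```python
# Hedged sketch of vLLM offline inference. The model id and sampling
# parameters are assumptions; MiniCPM may need a particular vLLM version.
from vllm import LLM, SamplingParams

llm = LLM(model="openbmb/MiniCPM-2B-dpo-bf16", trust_remote_code=True)
params = SamplingParams(temperature=0.8, top_p=0.8, max_tokens=256)

outputs = llm.generate(["Introduce MiniCPM in one sentence."], params)
for out in outputs:
    print(out.outputs[0].text)
```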

The team has collaborated on updating documentation, refining the model's deployment capabilities, and ensuring the model's performance is well-documented. The pattern of activity suggests a strong focus on improving user experience and accessibility of the model, as well as preparing the model for broader use cases.

Conclusions

The development team is actively working on improving the model's usability and addressing its limitations. The recent commits indicate a collaborative effort to enhance documentation, deployment, and fine-tuning capabilities. The team's responsiveness to issues and its commitment to open-sourcing the work for academic research and limited commercial use are evident from recent activities.

[Link to the MiniCPM repository](https://github.com/OpenBMB/MiniCPM)