Alpha version, LOCI 2.0, for early-access invitees
![](https://loci-dev.auroralabs.com/wp-content/uploads/2024/05/max_shirko_None_b20aa929-95e1-4cff-81b4-8d0306ac0b3e-2-1-1.png)
LOCI, the AI Advisor Engineer for software reliability and quality.
![](https://loci-dev.auroralabs.com/wp-content/uploads/2024/06/Screen-Shot-2024-06-11-at-19.15.16.png)
Our Investors
Your AI Advisor Engineer
Equipped with Aurora Labs' LCLM, a Large Code Language Model that analyzes software artifacts and transforms complex information into meaningful insights.
LOCI is the first AI Advisor Engineer with advanced prompting capabilities for software development. It detects emerging software anomalies and trends, provides guidance on the progress of project branches for seamless integration, and helps develop accurate, targeted tests, improving system quality, reliability, and compatibility.
LOCI extends the capabilities of GitHub Copilot. After coding, with or without a Copilot-like tool, LOCI 2.0 advises on quality, reliability, and compatibility issues, enabling developers and testers to meet their defined KPIs.
LOCI, Plan my test strategy
- LOCI helps you identify the most critical tests and symbols to focus on when planning your test strategy. This involves:
- Identifying top tests: listing the highest-priority tests from your test suite, sorted by rank, which indicates their importance.
- Identifying vulnerable symbols: listing the most vulnerable symbols, sorted by rank, which indicates their importance.
- By running the most important tests and writing tests for the most vulnerable symbols, you can ensure that your code is well tested and that potential issues are mitigated faster than ever before.
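The prioritization idea above can be sketched in a few lines of code. This is a hypothetical illustration, not LOCI's actual ranking model; the test names and rank scores are invented.

```python
# Hypothetical sketch of test prioritization: sort tests by an importance
# rank (higher = more important) and surface the top N. The data and
# scores below are invented for illustration.

def top_tests(tests, n=3):
    """Return the n highest-ranked tests."""
    return sorted(tests, key=lambda t: t["rank"], reverse=True)[:n]

tests = [
    {"name": "test_can_bus_init", "rank": 0.91},
    {"name": "test_ota_rollback", "rank": 0.87},
    {"name": "test_log_format", "rank": 0.12},
    {"name": "test_sensor_fusion", "rank": 0.78},
]

for t in top_tests(tests):
    print(t["name"], t["rank"])
```

However the rank is computed, the workflow is the same: run the top-ranked tests first, then write new tests for the highest-ranked untested symbols.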
LOCI, Show me where to Optimize
- LOCI shows you where to optimize by identifying the most significant duplicate code segments in your codebase. These segments can be refactored to reduce redundancy and improve maintainability.
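One common way to surface duplicate segments, sketched here purely for illustration (LOCI's actual detection method is not documented here), is to hash normalized sliding windows of lines and report windows that collide:

```python
# Illustrative duplicate-segment finder: hash normalized windows of lines
# and report repeats. Not LOCI's implementation.
import hashlib

def duplicate_windows(source, window=3):
    lines = [" ".join(l.split()) for l in source.splitlines() if l.strip()]
    seen, dups = {}, []
    for i in range(len(lines) - window + 1):
        key = hashlib.sha1("\n".join(lines[i:i + window]).encode()).hexdigest()
        if key in seen:
            dups.append((seen[key], i))  # (first occurrence, clone)
        else:
            seen[key] = i
    return dups

code = """
x = a + b
y = x * 2
log(y)
x = a + b
y = x * 2
log(y)
"""
print(duplicate_windows(code))  # [(0, 3)]: lines 3-5 clone lines 0-2
```

Exact-hash matching only catches verbatim clones; similar-but-not-identical segments need fuzzier matching, which is where a code-trained model earns its keep.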
LOCI, Let’s mitigate risk
- LOCI helps mitigate risk by identifying and addressing vulnerable symbols in your codebase. Vulnerable symbols are those untested by any current scenario, making them crucial to identify and address to prevent potential issues.
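The "vulnerable symbols" notion reduces to a set difference: symbols in the codebase minus symbols exercised by any test scenario. The sketch below is illustrative only; the symbol and test names are invented.

```python
# Sketch of vulnerable-symbol detection: a symbol is vulnerable when no
# current test scenario exercises it. All names here are invented.

def vulnerable_symbols(all_symbols, coverage_by_test):
    """Return symbols covered by no test, sorted for stable output."""
    covered = set()
    for syms in coverage_by_test.values():
        covered |= syms
    return sorted(set(all_symbols) - covered)

symbols = ["parse_frame", "crc16", "flash_write", "log_emit"]
coverage = {
    "test_parse": {"parse_frame", "crc16"},
    "test_logging": {"log_emit"},
}
print(vulnerable_symbols(symbols, coverage))  # ['flash_write']
```

In practice the coverage map would come from instrumentation or trace data rather than being written by hand.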
![](https://loci-dev.auroralabs.com/wp-content/uploads/2024/06/Test-Strategy-final-full-screen.gif)
![](https://loci-dev.auroralabs.com/wp-content/uploads/2024/06/Optimize-final-full-screen.gif)
![](https://loci-dev.auroralabs.com/wp-content/uploads/2024/06/Risk-Mitigation-final-full-screen.gif)
LOCI For Refactoring
LOCI reduces engineering time by 20-30% per sprint!
LOCI delivers AI-powered code refactoring that adapts to your project, ensuring smarter, safer, and more efficient code changes. We tuned the model to be non-creative and adhere strictly to the programming language's rules, and it adapts to your or your organization's coding style.
Full transparency of redundant, similar, and duplicate code segments
- Scan for redundant, cloned, and similar segments
- Filter by module, repository, and segment
Automatically refactor code according to your style
- Refactoring creativity is monitored and 'relaxed'
- Optimized for embedded systems
- Works across repositories and projects, with a revert mechanism
Fix manually once, in one place, and apply to all other similar segments
- Apply your own fix, and all similar code is updated at once.
- Apply changes to all segments with one click.
- Address the warnings raised by third-party tools for every segment with a single click.
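The "fix once, apply everywhere" workflow can be pictured as a textual replacement across files. This is a deliberately naive sketch: real clone-aware refactoring must also match near-duplicates with renamed variables and different formatting.

```python
# Naive sketch of "fix once, apply to all similar segments": replace every
# occurrence of the faulty segment with the fixed one across files.
# Real tooling must also handle near-duplicates, not just exact text.

def apply_fix_everywhere(files, old_segment, fixed_segment):
    """files: {path: source text}. Returns updated copies of each file."""
    return {path: text.replace(old_segment, fixed_segment)
            for path, text in files.items()}

files = {
    "a.c": "len = strlen(buf);\nmemcpy(dst, buf, len);\n",
    "b.c": "len = strlen(buf);\nmemcpy(dst, buf, len);\n",
}
fixed = apply_fix_everywhere(
    files,
    "memcpy(dst, buf, len);",
    "memcpy(dst, buf, len + 1);  /* include NUL */",
)
print(fixed["b.c"])
```

A revert mechanism, as the feature list above mentions, matters here: a single bad canonical fix would otherwise propagate to every clone site at once.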
![](https://loci-dev.auroralabs.com/wp-content/uploads/2024/06/1.1.-Full-transparency-of-redundant-code-v1.gif)
![](https://loci-dev.auroralabs.com/wp-content/uploads/2024/06/3.1.-Automatically-refactor-redundant-code-according-developer-style.gif)
![](https://loci-dev.auroralabs.com/wp-content/uploads/2024/06/4.1.-Fix-manually-similar-segments-warrning-security-warrning-bug.gif)
Start Your Journey with LOCI
Free, Pro, and Enterprise. You can subscribe month-to-month or pay for a full year up front for a 20% discount.
LOCI for Refactoring
Coming soon – June 24
LOCI AI Advisor
FREE
Non-commercial and commercial usage
$0 per user
$0 per month / $0 per year

Pro
Full package
$12 per user/month
$120 per user/year

FREE
Non-commercial and commercial usage
$0 per user
$0 per month / $0 per year

Pro
Full package
$14 per user/month
$140 per user/year
Frequently Asked Questions
- General
- Responsible AI
- Upcoming Features
Why is continuous refactoring essential, not just an option?
Continuous refactoring is not just an option but an essential practice in software development. It fosters code maintainability, quality, readability, performance, and adaptability while reducing technical debt and improving testability. By investing time and effort in refactoring, development teams can build software that is more robust, flexible, and sustainable over time.
Why should refactoring and optimization be a part of every sprint?
Incorporating refactoring and optimization tasks into every sprint ensures that the development process remains iterative, adaptive, and focused on delivering high-quality software that meets both technical and business requirements. It helps the team build and maintain a sustainable pace of development while continuously improving the overall product.
When is refactoring absolutely necessary?
Refactoring is absolutely necessary in various scenarios to maintain code quality, improve performance, enhance readability, and prepare the codebase for future changes and growth. It is an essential practice in software development that helps ensure the long-term success and sustainability of the software product.
How does LOCI help prepare code for compliance early on?
LOCI helps prepare code for compliance early on by setting the refactoring tone and style according to your project's coding standards and conventions.
Aurora Labs has developed a responsible AI solution featuring a Large Code Language Model, leveraging transformer-based architecture to work seamlessly with Software CI/CD artifacts. This cutting-edge technology ensures complete data privacy, keeping all artifacts secure and confined within the customer’s premises, with no risk of leaks or external sharing.
From NLP for SW lines of code to Large Code Language Model
2016-2018
NLP, LSTM, and ART/Fuzzy ART models; statistical methods; 3diff
2018-2021
Added: autoencoders, GNG, distributed algorithms, clustering
2021-2022
Added: Transformer model, graph-based algorithms
A Large Code Language Model (our LCLM) and a Large Language Model (LLM) are both types of AI models designed to understand and generate human-like text. However, there are key differences between them:
Domain Specialization
- Our LCLM: Specifically tailored for understanding and generating code for refactoring. It is optimized to handle programming languages, binary files, code syntax, and software development tasks.
- LLM: Designed for a broader range of text-based tasks, including natural language understanding and generation. It handles general language tasks like translation, summarization, and conversation.
Training Data
- Our LCLM: Trained predominantly on binary files, tracing data, source code, documentation, and other programming-related texts. The focus is on repositories, CI/CD artifacts, coding platforms, and technical manuals.
- LLM: Trained on a vast and diverse set of text data, including books, articles, websites, and other general language sources.
Use Cases
- Our LCLM: Code-related tasks such as refactoring, duplicate-code detection, test planning, and risk mitigation across source code, binary files, and CI/CD artifacts.
- LLM: General language tasks such as translation, summarization, and conversation.
Model Architecture and Features
- Our LCLM: Features tailored tokenizers and vocabularies, up to 1000× smaller, designed to handle the syntax and structure of programming languages efficiently. It may include specialized pipelines for binary files, code analysis, and code generation.
- LLM: Uses general-purpose architectures and tokenizers that are effective for a broad range of language tasks.
In summary, while both LCLMs and LLMs leverage large-scale data and sophisticated architectures, LCLMs are specialized for code-related tasks and binary files, making them more precise and efficient in software development contexts.
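The tokenizer point can be illustrated with a toy code-aware tokenizer: because source code draws on a closed set of keywords, operators, and identifier shapes, a code tokenizer can cover it with far fewer patterns (and a far smaller vocabulary) than a general-language tokenizer needs. The regex below is a simplified stand-in, not Aurora Labs' tokenizer.

```python
# Toy code tokenizer: a handful of patterns covers identifiers, numbers,
# and operators, illustrating why code-specialized vocabularies can be
# much smaller than general-language ones.
import re

# Multi-character operators must precede the single-character class so
# that ">=" is matched whole rather than as ">" then "=".
CODE_TOKEN = re.compile(r"[A-Za-z_]\w*|\d+|==|!=|<=|>=|[-+*/=<>(){}\[\];,.]")

def tokenize_code(src):
    return CODE_TOKEN.findall(src)

print(tokenize_code("if (count >= MAX_RETRIES) { reset(count); }"))
```

A real code tokenizer would also handle string literals, comments, and subword splitting of identifiers, but the closed-vocabulary intuition carries over.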