Holistic AI Launches Open-Source Library to Advance Responsible AI

Holistic AI OSL provides the most comprehensive library for eliminating bias and improving explainability in AI systems available today

Oct. 22, 2024 – San Jose – (Business Wire) – Holistic AI, the leading AI governance platform for the enterprise, today announced the launch of Holistic AI OSL, an optimized open-source library designed to help developers build fair and responsible AI systems. AI architects and developers can now access the library, which provides advanced tools for eliminating bias and improving explainability. Holistic AI OSL empowers teams to create more transparent and trustworthy AI applications from the ground up, fostering a safer environment of innovation and experimentation to benefit society. For more information, visit the Holistic AI blog or download the library for Python, which is available today free of charge without any licensing requirements.

Organizations increasingly rely on AI systems in critical areas such as recruitment and onboarding, healthcare, loan approval and credit scoring, where fairness is paramount. It is essential that algorithms do not inadvertently discriminate, so that demographic groups and individuals receive equal treatment. While AI has made significant advances in prediction accuracy, recent studies indicate that 65% of AI researchers and developers still identify bias as a major issue.

Holistic AI OSL tackles this challenge by providing tools that address the five key technical risks associated with AI systems, ensuring greater accountability. Specifically, OSL offers:

  • Bias Mitigation: Introduces over 35 bias metrics across five machine learning tasks and provides 30 strategies to help developers eliminate bias in their systems.
  • Explainability: Illuminates the system’s behavior by revealing how models make decisions and predictions, fostering transparency and building trust.
  • Robustness: Ensures models perform consistently, even when faced with challenges like adversarial attacks or variations in input data.
  • Security: Provides safeguards for user privacy through anonymization and defends against risks like attribute inference attacks, enhancing overall security.
  • Efficacy: Ensures models are not only accurate but maintain fairness, robustness, and security under various conditions, balancing these factors through detailed testing in real-world scenarios.
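
To make the bias-metric idea above concrete, here is a minimal sketch of one widely used fairness measure, the statistical parity difference between two demographic groups. This is illustrative plain Python, not the Holistic AI OSL API; the toy hiring data is invented for the example.

```python
import numpy as np

def statistical_parity_difference(group_a, group_b, y_pred):
    """Difference in positive-prediction rates between two demographic groups.

    group_a, group_b: boolean masks selecting members of each group.
    y_pred: binary model predictions (1 = favorable outcome).
    A value near 0 suggests the model treats both groups similarly.
    """
    y_pred = np.asarray(y_pred)
    rate_a = y_pred[np.asarray(group_a)].mean()
    rate_b = y_pred[np.asarray(group_b)].mean()
    return rate_a - rate_b

# Toy example: a hiring model that favors group A (hypothetical data).
preds = [1, 1, 1, 0, 1, 0, 0, 0]
in_a = [True, True, True, True, False, False, False, False]
in_b = [not x for x in in_a]
print(statistical_parity_difference(in_a, in_b, preds))  # 0.5: group A is favored
```

A library such as OSL packages dozens of metrics like this, along with mitigation strategies that push the measured disparity toward zero.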

“Our new library equips organizations with tools for all AI risks, including explainability, robustness, and bias. It supports measurement, reporting, and mitigation at every stage of the AI lifecycle, offering one of the most advanced solutions for improving quality in AI applications today,”

said Adriano Koshiyama, Co-CEO of Holistic AI.

“Our goal is to help AI realize its full potential. Whether through this open-source library or our comprehensive AI governance platform, we are committed to empowering businesses to accelerate AI innovation across their enterprise—enabling them to complete more projects successfully without facing risks, compliance issues, or bias, all while tracking against the expected ROI.”

As one of the top global insurers operating in almost 40 countries across five continents and serving over 30 million customers worldwide, MAPFRE is leveraging AI as part of its innovation strategy around continuous improvement of its customer experiences, processes, and operations. Holistic AI OSL and the full Holistic AI Governance Platform are part of MAPFRE’s technology lineup.

“What sets this library apart is its depth—it’s not just about identifying AI risks but actively addressing them with proven, industry-ready mitigation techniques, making it an essential part of any ethical AI development toolkit,”

said César Ortega, Expert Data Scientist at MAPFRE.

About Holistic AI

Founded in 2020, Holistic AI’s mission is to empower enterprises to adopt and scale AI with confidence. Our purpose-built AI governance platform helps companies accelerate AI transformation across the organization – transparently, responsibly, and with ROI accountability for the C-Suite. With Holistic AI, businesses can increase visibility and control of AI projects, eliminate communication bottlenecks across teams, and significantly reduce AI risk at enterprise scale. Holistic AI is part of Microsoft’s Founders’ Hub, Pegasus Program, and Nvidia’s Inception program. Holistic AI founders are active members of the NIST AI Safety Institute, experts on the UN AI Advisory Body, members of OECD’s Network of Experts on AI, advisors on the EU AI Act, and collaborators with the Alan Turing Institute.

For more information, see www.holisticai.com.

CoreStack Makes the Inc. 5000 List of America’s Fastest Growing Private Companies Two Years Running

NextGen Cloud Governance Company Places 12th Among the Fastest Growing Companies in the Seattle Area

BELLEVUE, WA — August 13, 2024 — CoreStack, a global multi-cloud governance provider, is proud to announce it has made the coveted Inc. 5000 list for the second year in a row. This list recognizes CoreStack as one of the fastest-growing private companies in the U.S., reflecting its dramatic growth and outsized influence within the cloud industry.

The Inc. 5000 class of 2024 represents companies that have driven rapid revenue growth despite strong economic headwinds. Of the 5,000 fastest-growing private companies on Inc.’s 2024 list, CoreStack ranks as No. 1013. The company ranks No. 121 in the Software category and No. 12 in the Seattle area.

Inc.’s annual ranking provides a data-driven look at the most successful companies within the economy’s most dynamic segment—its independent, entrepreneurial businesses. This year’s Inc. 5000 companies have added 874,458 jobs to the economy over the past three years.

“One of the greatest joys of my job is going through the Inc. 5000 list,” says Mike Hofman, who recently joined Inc. as editor-in-chief. “Congratulations to this year’s honorees for growing their businesses fast despite the economic disruption we all faced over the past three years.”

For complete results of the Inc. 5000, including company profiles and an interactive database that can be sorted by industry, location, and other criteria, go to www.inc.com/inc5000. The top 500 companies are featured in the September issue of Inc. magazine, available on newsstands beginning Tuesday, August 20.

“It is indeed an honor to make Inc.’s list of fastest-growing companies two years in a row,” says CoreStack’s CEO, Ezhilarasan (Ez) Natarajan. “This recognition reflects not only our robust growth but also the transformative value our cloud governance technology continues to deliver to partners and customers.”

Earlier this year, Inc. revealed that CoreStack ranked No. 41 on the Inc. 5000 Regionals: Pacific list, the most prestigious ranking of the fastest-growing private companies in the Pacific region, including California, Oregon, Washington, Alaska, and Hawaii. CoreStack ranked No. 42 on the Regionals: Pacific list in 2023. Inc. has also recognized the company as a Best Workplace for the last three years.

CoreStack offers a suite of NextGen Cloud Governance modules that leverage AI to provide continuous and autonomous multi-cloud governance through a unified dashboard for FinOps, SecOps, and CloudOps. NextGen Cloud Governance helps enterprises mitigate risk, accelerate delivery, optimize performance, and innovate faster. In addition, CoreStack offers assessments based on Well-Architected and custom frameworks. This solution streamlines the process of evaluating, improving, and maintaining cloud workloads across all environments.

What is DataOps

Data workflows today have grown increasingly intricate, diverse, and interconnected. Leaders in data and analytics (D&A) are looking for tools that streamline operations and minimize the reliance on custom solutions and manual steps in managing data pipelines.

DataOps is a framework that brings together data engineering and data science teams to address an organization’s data requirements. It adopts an automation-driven approach to the development and scaling of data products. This approach also streamlines the work of data engineering teams, enabling them to provide other stakeholders with dependable data for informed decision-making.

Initially pioneered by data-driven companies that applied CI/CD principles and even developed open-source tools to support their data teams, DataOps has steadily gained traction. Today, data teams of all sizes increasingly rely on DataOps as a framework for quickly deploying data pipelines while ensuring the data remains reliable and readily accessible.

Gartner defines DataOps as “a collaborative data management practice focused on improving the communication, integration and automation of data flows between data managers and data consumers across an organization.”

Why DataOps is Important

Manual data management tasks can be both time-consuming and inefficient, especially as businesses evolve and demand greater flexibility. A streamlined approach to data management, from collection through to delivery, allows organizations to adapt quickly while handling growing data volumes and building data products.

DataOps tackles these challenges by bridging the gap between data producers (upstream) and consumers (downstream). By integrating data across departments, DataOps promotes collaboration, giving teams the ability to access and analyze data to meet their unique needs. This approach improves data speed, reliability, quality, and governance, leading to more insightful and timely analysis.

In a DataOps model, cross-functional teams—including data scientists, engineers, analysts, IT, and business stakeholders—work together to achieve business objectives.

DataOps vs DevOps: Is There a Difference?

Although DevOps and DataOps sound similar, they serve distinct functions within organizations. While both emphasize collaboration and automation, their focus areas are different: DevOps is centered around optimizing software development and deployment, whereas DataOps focuses on ensuring data quality and accessibility throughout its lifecycle.

The DataOps Framework

The DataOps framework joins together different methodologies and practices to improve data management and analytics workflows within organizations. It consists of five key components:

1. Data Orchestration

Data orchestration automates the arrangement and management of data processes, ensuring seamless collection, processing, and delivery across systems. Key elements include:

  • Workflow automation: Automates scheduling and execution of data tasks to enhance efficiency.
  • Data integration: Combines data from diverse sources into a unified view for consistency and accessibility.
  • Error handling: Detects and resolves errors during data processing to maintain integrity.
  • Scalability: Adapts to increasing data volumes and complexity without compromising performance.
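
The workflow-automation element above can be sketched with a dependency graph of pipeline tasks executed in topological order. This is a minimal illustration using Python's standard library; the task names are hypothetical, and a real deployment would delegate to an orchestrator such as Airflow or Dagster.

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline tasks mapped to their upstream dependencies.
pipeline = {
    "extract_orders": set(),
    "extract_customers": set(),
    "clean": {"extract_orders", "extract_customers"},
    "join": {"clean"},
    "load_warehouse": {"join"},
}

def run_task(name):
    # Stand-in for invoking the real task; an orchestrator would also
    # handle retries, error alerts, and scheduling here.
    print(f"running {name}")

# Execute tasks in dependency order so no task runs before its inputs exist.
for task in TopologicalSorter(pipeline).static_order():
    run_task(task)
```

The orchestrator's job is exactly this ordering plus the error handling and scaling concerns listed above, applied continuously rather than in a single pass.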

2. Data Governance

Data governance establishes policies and standards that guarantee the accuracy, quality, and security of data, facilitating effective management of structured data assets. Key elements include:

  • Data quality management: Ensures data is accurate, complete, and reliable.
  • Data security: Protects data from unauthorized access and breaches through various measures.
  • Data lineage: Tracks the origin and transformation of data for transparency.
  • Compliance: Ensures adherence to regulatory requirements and industry standards, such as GDPR and HIPAA.
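
The data-lineage element above amounts to recording, for every dataset, which datasets it was derived from and how. A minimal sketch (dataset names and transformations are hypothetical):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LineageRecord:
    dataset: str
    derived_from: list
    transformation: str
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

lineage = []

def register(dataset, derived_from, transformation):
    """Record how a dataset was produced; called by each pipeline step."""
    rec = LineageRecord(dataset, derived_from, transformation)
    lineage.append(rec)
    return rec

register("raw_orders", [], "ingest from OLTP source")
register("clean_orders", ["raw_orders"], "dedupe + null handling")

def upstream(dataset):
    """Walk the lineage graph back to every ancestor of a dataset."""
    sources = {r.dataset: r.derived_from for r in lineage}
    seen, stack = set(), list(sources.get(dataset, []))
    while stack:
        d = stack.pop()
        if d not in seen:
            seen.add(d)
            stack.extend(sources.get(d, []))
    return seen

print(upstream("clean_orders"))  # {'raw_orders'}
```

With this record in place, compliance questions like "which reports are affected if this source table changes?" become graph queries rather than archaeology.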

3. Continuous Integration and Continuous Deployment (CI/CD)

CI/CD practices automate the testing, integration, and deployment of data applications, enhancing responsiveness. Key elements include:

  • Continuous integration: Merges code changes into a shared repository with automated testing for early issue detection.
  • Continuous deployment: Automates deployment of tested code to production environments.
  • Automated testing: Includes various tests to ensure the correctness of data applications.
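
The automated-testing element above typically takes the form of data checks that run in the CI pipeline before a change is merged. A minimal sketch with hypothetical checks and sample rows:

```python
# Hypothetical data-quality checks that a CI pipeline would run
# against a sample or staging copy of the data before deployment.

def check_no_nulls(rows, column):
    """Fail if any row is missing a value in the given column."""
    return all(r.get(column) is not None for r in rows)

def check_in_range(rows, column, lo, hi):
    """Fail if any value falls outside the expected business range."""
    return all(lo <= r[column] <= hi for r in rows)

sample = [
    {"order_id": 1, "amount": 25.0},
    {"order_id": 2, "amount": 99.5},
]

assert check_no_nulls(sample, "order_id"), "order_id must never be null"
assert check_in_range(sample, "amount", 0, 10_000), "amount out of bounds"
print("all data checks passed")
```

Because the checks are just code, they version, review, and deploy exactly like the pipeline itself, which is the point of applying CI/CD to data.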

4. Data Observability

Data observability involves ongoing monitoring and analysis of data systems to proactively detect and address issues, delivering visibility into data workflows. Key elements include:

  • Monitoring: Tracks the health and performance of data pipelines and applications.
  • Alerting: Notifies teams of anomalies or performance issues in real time.
  • Metrics and dashboards: Provide visual insights into key performance indicators (KPIs).
  • Scalability: Extends monitoring coverage as data volumes and pipeline complexity grow.
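
The monitoring and alerting elements above can be reduced to a simple idea: compare each new pipeline metric against its history and alert on strong deviations. A minimal z-score sketch (the row counts are hypothetical):

```python
import statistics

def detect_anomalies(history, latest, z_threshold=3.0):
    """Flag the latest pipeline metric if it deviates strongly from history."""
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return latest != mean
    return abs(latest - mean) / stdev > z_threshold

# Daily row counts from recent pipeline runs (hypothetical numbers).
row_counts = [10_120, 9_980, 10_050, 10_200, 9_900]
print(detect_anomalies(row_counts, 10_100))  # False: normal volume
print(detect_anomalies(row_counts, 1_500))   # True: would trigger an alert
```

Production observability tools layer richer statistics, dashboards, and notification channels on top, but this is the core check they run.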

5. Automation

Automation minimizes manual intervention by utilizing tools and scripts to perform repetitive tasks, enhancing efficiency and accuracy in data processing. Key elements include:

  • Task automation: Automates routine tasks like ETL and reporting.
  • Workflow automation: Streamlines complex workflows using dependencies and scheduling.
  • Self-service: Enables users to access and analyze data independently through user-friendly interfaces.
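
Task automation in the ETL sense means chaining extract, transform, and load steps so they run end to end without manual intervention. A minimal illustrative sketch (the source data and target are stand-ins):

```python
# Minimal sketch of an automated ETL step (illustrative only).

def extract():
    # Stand-in for reading from a source system.
    return [{"name": " Alice ", "spend": "120.5"}, {"name": "Bob", "spend": "80"}]

def transform(rows):
    # Clean whitespace and cast spend to a numeric type.
    return [{"name": r["name"].strip(), "spend": float(r["spend"])} for r in rows]

def load(rows, target):
    # Stand-in for writing to a warehouse table.
    target.extend(rows)
    return len(rows)

warehouse = []
loaded = load(transform(extract()), warehouse)
print(f"loaded {loaded} rows")  # loaded 2 rows
```

Scheduling this chain (and reporting its outcome) is where the workflow-automation and self-service elements above take over.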

How Does DataOps Work

DataOps primarily consists of the following four processes:

  1. Data Integration: This process aims to create a cohesive view of fragmented and distributed organizational data through seamless, automated, and scalable data pipelines. The objective is to efficiently locate and integrate the appropriate data without sacrificing context or accuracy.
  2. Data Management: This involves automating and optimizing data processes and workflows from creation to distribution, throughout the entire data lifecycle. Agility and responsiveness are essential for effective DataOps.
  3. Data Analytics Development: This process facilitates rapid and scalable data insights by utilizing optimal, reusable analytics models, user-friendly data visualizations, and continuous innovation to enhance data models over time.
  4. Data Delivery: The goal here is to ensure that all business users can access data when it is most needed. This extends beyond just efficient storage; it emphasizes timely data access with democratized self-service options for users.

In practice, the key phases in a DataOps lifecycle include:

  • Planning: Collaborating with teams to set KPIs and SLAs for data quality and availability.
  • Development: Building data products and machine learning models.
  • Integration: Incorporating code or data products into existing systems.
  • Testing: Verifying data against business logic and operational thresholds.
  • Release: Deploying data into a test environment.
  • Deployment: Merging data into production.
  • Operation: Running data in applications to fuel ML models.
  • Monitoring: Continuously checking for anomalies in data.

This iterative cycle promotes collaboration, enabling data teams to effectively identify and prevent data quality issues by applying DevOps principles to data pipelines.

Who Owns DataOps

DataOps teams usually incorporate temporary stakeholders throughout the sprint process. However, each DataOps team relies on a core group of permanent data professionals, which typically includes:

  1. The Executive (CDO, CTO, etc.): This leader guides the team in delivering business-ready data for consumers and leadership. They ensure the security, quality, governance, and lifecycle management of all data products.
  2. The Data Steward: Responsible for establishing a data governance framework within the organization, the data steward manages data ingestion, storage, processing, and transmission. This framework serves as the foundation of the DataOps initiative.
  3. The Data Quality Analyst: Focused on enhancing the quality and reliability of data, the data quality analyst ensures that higher data quality leads to improved results and decision-making for consumers.
  4. The Data Engineer: The data engineer constructs, deploys, and maintains the organization’s data infrastructure, which includes all data pipelines and SQL transformations. This infrastructure is crucial for ingesting, transforming, and delivering data from source systems to the appropriate stakeholders.
  5. The Data/BI Analyst: This role involves manipulating, modeling, and visualizing data for consumers. The data/BI analyst interprets data to help stakeholders make informed strategic business decisions.
  6. The Data Scientist: Tasked with producing advanced analytics and predictive insights, the data scientist enables stakeholders to enhance their decision-making processes through enriched insights.

Benefits of DataOps

Adopting a DataOps solution offers numerous benefits:

1. Improved Data Quality

DataOps enhances data quality by automating traditionally manual and error-prone tasks like cleansing, transformation, and enrichment. This is crucial in industries where accurate data is vital for decision-making. By providing visibility throughout the data lifecycle, DataOps helps identify issues early, enabling organizations to make faster, more confident decisions.

2. Faster Analytics Deployment

Successful DataOps implementation can significantly decrease the frequency of late data analytics product deliveries. DataOps accelerates analytics deployment by automating provisioning, configuration, and deployment tasks, which reduces the need for manual coding. This allows data engineers and analysts to quickly iterate solutions, resulting in faster application rollouts and a competitive edge.

3. Enhanced Communication and Collaboration

DataOps fosters better communication and collaboration among teams by centralizing data access. This facilitates cross-team collaboration and improves the efficiency of releasing new analytics developments. By automating data-related tasks, teams can focus on higher-level activities, such as innovation and collaboration, leading to better utilization of data resources.

4. More Reliable and Efficient Data Pipeline

DataOps creates a more robust and faster data pipeline by automating data ingestion, warehousing, and processing tasks, which reduces human error. It improves pipeline efficiency by providing tools for management and monitoring, allowing engineers to proactively address issues.

5. Easier Access to Archived Data

DataOps simplifies access to archived data through a centralized repository, making it easy to query data compliantly and automating the archiving process to enhance efficiency and reduce costs. DataOps also promotes data democratization by making vetted, governed data accessible to a broader range of users, optimizing operations and improving customer experiences.

Best Practices for DataOps

Data and analytics (D&A) leaders should adopt DataOps practices to overcome the technical and organizational barriers that slow down data delivery across their organizations. As businesses evolve rapidly, there is an increasing need for reliable data among various consumer personas, such as data scientists and business leaders. This has heightened the demand for trusted, decision-quality data.

DataOps begins with cleaning raw data and establishing a technology infrastructure to make it accessible. Once implemented, collaboration between business and data teams becomes essential. DataOps fosters open communication and encourages agile methodologies by breaking down data processes into smaller, manageable tasks. Automation streamlines data pipelines, minimizing human error.

Building a data-driven culture is also vital. Investing in data literacy empowers users to leverage data effectively, creating a continuous feedback loop that enhances data quality and prioritizes infrastructure improvements. Treating data as a product requires stakeholder involvement to align on KPIs and develop service level agreements (SLAs) early in the process. This ensures focus on what constitutes good data quality within the organization.

To successfully implement DataOps, keep the following best practices in mind:

  1. Define data standards early: Establish clear semantic rules for data and metadata.
  2. Assemble a diverse team: Build a team with various technical skills.
  3. Automate for efficiency: Use data science and BI tools to automate processing.
  4. Break silos: Encourage communication and utilize integration tools.
  5. Design for scalability: Create a data pipeline that adapts to growing data volumes.
  6. Build in validation: Continuously validate data quality through feedback loops.
  7. Experiment safely: Use disposable environments for safe testing.
  8. Embrace continuous improvement: Focus on ongoing efficiency enhancements.
  9. Measure progress: Establish benchmarks and track performance throughout the data lifecycle.

By treating data like a product, organizations can ensure accurate, reliable insights to drive decision-making.

Conclusion

By automating tasks, enhancing communication and collaboration, establishing more reliable and efficient data pipelines, and facilitating easier access to archived data, DataOps can significantly improve an organization’s overall performance.

However, it’s important to note that DataOps is not a one-size-fits-all solution; it won’t automatically resolve all data-related challenges within an organization.

Nevertheless, when implemented effectively, a DataOps solution can enhance your organization’s performance and help sustain its competitive advantage.

SQL Server lifecycle and considerations for enterprises

SQL Server is one of the most versatile databases that enterprises trust for their database workloads. It is a traditional Online Transactional Processing (OLTP) database, and over the years enterprises across industry verticals such as financial services, healthcare, media and entertainment, manufacturing, and insurance have built a plethora of applications on SQL Server. Every few years, Microsoft releases a new version of SQL Server (such as the 2014, 2016, 2017, 2019, and 2022 editions) with feature enhancements that make the product more secure, more compliant, and more performant, keeping pace with the growing needs of enterprise data. I spent several years in the core SQL Server product team and can proudly vouch for the rigorous testing done on the product prior to any release. The SQL Server engineering and product teams have been known across the industry for decades of engineering excellence in delivering a robust engine that serves millions of customers worldwide.

Each version of SQL Server is backed by a minimum of 10 years of support: five years of mainstream support (functional, performance, scalability, and security updates) and five years of extended support (security updates only). Customers nearing the end of their 10 years on a particular version choose to either migrate to the cloud (Azure SQL, or an Azure Virtual Machine for free extended security updates), upgrade to a more recent version of SQL Server, or purchase an extended security updates subscription from Microsoft. Enterprise customers typically remain on the n-1 or n-2 version of the product (n being the latest version) and must choose one of the options above before the 10-year end of life. For critical workloads and for business reasons, several enterprise customers need to remain on-premises and cannot move to the cloud; they are tasked with migrating to the latest version of SQL Server along with upgrading their physical hardware. Recently, July 9, 2024 marked the end of life for SQL Server 2014 support. On-premises customers will need to move to a recent version of SQL Server and upgrade the necessary hardware to meet the system requirements, which involves significant cost and planning for enterprises.

Customers have built applications on SQL Server, and most of these applications demand some form of reporting and machine learning capability on the data stored in SQL Server. Customers use SQL Server Machine Learning Services, launched in SQL Server 2016 with R support and extended in 2017 with Python support, to run ML workloads within their SQL Server database instances. However, when using ML Services the R or Python code is wrapped inside an sp_execute_external_script stored procedure in T-SQL, and customers lose IntelliSense and debugging capabilities. I have seen instances where data scientists query the SQL instances, pull the data outside SQL Server to create their ML models, store these models as binary objects within SQL Server, and then score against them. The moment the data is pulled outside SQL Server, the trust boundaries of the data are lost and customer data is potentially exposed to a larger attack surface.
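
To illustrate the wrapping described above, the following sketch builds the T-SQL batch that SQL Server Machine Learning Services expects: Python training code embedded inside sp_execute_external_script, so the data never leaves the database. The table, column, and model names are hypothetical, and in practice the statement would be declared and submitted through a client driver.

```python
# Python training code that would run *inside* SQL Server via ML Services.
# InputDataSet is the name ML Services gives the query result passed in.
python_script = """
from sklearn.linear_model import LogisticRegression
import pickle
model = LogisticRegression().fit(
    InputDataSet[["age", "balance"]], InputDataSet["churned"]
)
trained_model = pickle.dumps(model)
"""

# The T-SQL wrapper: the script, its input query, and an OUTPUT parameter
# that returns the trained model as a varbinary for storage in a table.
tsql = f"""
EXEC sp_execute_external_script
    @language = N'Python',
    @script = N'{python_script}',
    @input_data_1 = N'SELECT age, balance, churned FROM dbo.Customers',
    @params = N'@trained_model varbinary(max) OUTPUT',
    @trained_model = @model OUTPUT;
"""

print("sp_execute_external_script" in tsql)  # True
```

This is exactly the shape that costs developers IntelliSense and debugging: the Python lives inside a string inside a stored procedure call.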

Now, in 2024, we see a new wave of workloads where enterprise customers are trying to enable GenAI capabilities over their databases. Enterprises want either to make it easier for their customers to find the right information or to improve the overall experience of their applications. For outward-facing use cases, customers want capabilities like enterprise search over their data, replacing the drop-downs and filters in their applications with a simple search experience that lets their customers ask questions in natural language and get responses from their databases.

From my time at both Microsoft and Amazon, I have seen BI teams get randomized by the constant questions leadership asks of the data; each time, a new ad-hoc report gets created, and enterprises end up with hundreds of reports, wasting both time and resources. We observe internal-facing use cases where customers ask ad-hoc questions over their database instances, replacing manually created SSRS and Power BI reports over their SQL instances with questions in natural language. Imagine if enterprises had a natural language search bar that let leadership ask questions of their database instances and see results across thousands of tables.

At Tursio, we are turning SQL Servers into GenAI machines. Enterprise customers running SQL Server instances anywhere, on-premises (yes, you heard right!) or in the cloud, can get an in-situ GenAI solution using Tursio. Tursio can be deployed entirely on-premises (without any cloud connectivity), where enterprise customers can ask questions in natural language and get responses from within their databases. All the data modeling happens inside SQL Server instances and there is zero data movement; none of the data ever leaves your SQL Server. Tursio understands the ontology of the data, and as the underlying data changes the models are constantly refreshed, giving customers accurate, up-to-date results whenever a question is asked. Enterprises can invoke the same search bar from within their applications through a simple REST API endpoint. Tursio looks beyond just answering the questions enterprises ask to the value they seek once they get the answer: Are customers trying to predict demand? Find anomalies? Forecast? Classify? Customers using the Tursio platform get predictive insights from their data, allowing them to make business decisions faster and improve time to value, all within three seconds. Customers can define their own KPIs, and Tursio constantly learns and fine-tunes the data models, providing accurate results from the models it creates.

If you are a SQL Server customer and want to turbocharge your applications with GenAI capabilities without your data ever leaving SQL Server, feel free to drop a note below. In addition to SQL Server and Azure SQL, the Tursio platform also supports other databases and data warehouses such as Microsoft Fabric, AWS Redshift, Snowflake, Google BigQuery, Teradata, PostgreSQL, and MySQL. Here are some teaser screenshots of bringing generative AI to your data:

Example 1. Enterprise Search Questions using Tursio

Example 2. Understanding business KPIs using Tursio

Example 3. Analytical Questions using Tursio

Why & How Wayfair Migrated from OPA to Kyverno

Wayfair, a leading e-commerce platform in the Home Goods market, recently undertook a significant migration in its Kubernetes environment, transitioning from OPA (Open Policy Agent) to Kyverno. With around 14,000 employees, 2,000 engineers, and a substantial presence on Google Kubernetes Engine (GKE), Wayfair processes approximately 15,000 production deploys each month, emphasizing the scale and complexity of its operations.

In a recent presentation, Zach Swanson of Wayfair shared key insights about the Kubernetes infrastructure at Wayfair and its Kyverno adoption journey. Wayfair runs large multi-tenant clusters to accommodate its extensive developer community, treating each developer group as an isolated tenant. The use of Kyverno admission policies has become integral, managing around 56 validate rules and 20 mutate policies across the clusters. This approach allows Wayfair to protect its platform, preventing potential issues like misrouted traffic, insecure ingress configurations, and inadvertent resource mismanagement.

Kyverno Use Cases

The utilization of Kyverno at Wayfair falls into two broad categories. Firstly, Kyverno is employed to protect the platform. Beyond standard pod security, it is used to prevent various scenarios, such as unauthorized changes to ingress hosts, TLS declarations, and the enabling of features that could complicate issue tracking. Secondly, Kyverno is instrumental in seamlessly evolving the platform without requiring developers to make extensive changes. This involves the automatic adjustment of deprecated configurations, image registry failovers, and enhancements to resource efficiency, resulting in significant cost savings.

Reasons for Migrating to Kyverno

Wayfair’s decision to migrate from OPA to Kyverno was driven by several compelling factors. OPA’s Rego language, while powerful, posed challenges in terms of complexity, especially in comparison to Kyverno. Documentation gaps and subtle differences between Gatekeeper (OPA-based) and OPA itself further contributed to the decision. Notably, Wayfair lacked a centralized policy team, and the versatility of Kyverno allowed them to adopt a more streamlined approach. The Kyverno community’s responsiveness, coupled with an extensive public policy library, further solidified the benefits of the migration.

Migration Process

The migration process at Wayfair was a well-structured and methodical approach. It began with a crucial concept demo, showcasing Kyverno’s ability to handle complex constraints. Subsequently, Gatekeeper constraints were systematically retooled into Kyverno policies, with parallel deployment and confidence-building through testing utilities. Policies were transitioned from auditing to enforcing mode, ensuring alignment with existing Gatekeeper policies. The gradual disabling of Gatekeeper constraints marked the successful completion of the migration, emphasizing the straightforward nature of transitioning from OPA to Kyverno.

Summary

Wayfair’s migration from OPA to Kyverno reflects a strategic move to enhance the manageability, simplicity, and responsiveness of their Kubernetes environment. The shift not only addressed challenges associated with OPA but also empowered Wayfair to seamlessly adapt its platform, safeguard against potential issues, and significantly reduce resource allocation. This case study serves as valuable insight for organizations considering a similar transition, highlighting the benefits of Kyverno in managing Kubernetes policies at scale.

Are you interested in learning more about how to secure your Kubernetes clusters using Kyverno? Check out this ebook: Securing Kubernetes Using Policy-as-code powered by Kyverno

Day 2 Kubernetes Gets a Serious Boost

Why Kyverno and PMK from Nirmata are game changers, and why we invested in Nirmata

At Z5, we are constantly looking for companies that are at the cusp of business inflection and have breakout potential. We met one such company earlier this year. Founded by Jim, Ritesh, and Damien, Nirmata is the creator of Kyverno, an open source policy engine designed for Kubernetes, which has been growing very rapidly, with more than 6 million downloads in less than six months, over 1,200 GitHub stars, and an engaged and vibrant user community.

Kubernetes (aka K8s) is an open-source container-orchestration system for automating application deployment, scaling, and management. Originally developed by Google, Kubernetes is now under the Cloud Native Computing Foundation umbrella, and is the Cloud Native Operating System for managing containerized applications.

Day 2: Meeting the Challenge for Production Kubernetes

As enterprises adopt cloud native technologies and specifically Kubernetes, the transition from Day 0 (design and development), to Day 1 (configuration and deployment), and subsequently to Day 2 (governance, compliance, and automation) can be very challenging and can slow down adoption. Kubernetes is easy to spin up but is not secure by default. This is why Kyverno, a policy engine specifically designed for Kubernetes, has found rapid adoption within the developer community.

Kyverno to the Rescue

Using Kyverno, policies are managed as Kubernetes resources, and no new language is required to write policies. Kyverno policies can validate, mutate, and generate Kubernetes resources. Further, Kyverno is simple, elegant, and easy to scale, and thus provides enterprises the governance, compliance and automation solution they need for production Kubernetes.
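To make this concrete, here is a minimal sketch of what a Kyverno validation policy looks like when managed as a Kubernetes resource. The policy and label names are illustrative, not drawn from any deployment described above; the structure follows Kyverno's public policy library:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-team-label   # illustrative policy name
spec:
  # Audit reports violations without blocking; switching this field
  # to Enforce is how a policy moves from auditing to enforcing mode.
  validationFailureAction: Audit
  rules:
    - name: check-team-label
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "The label `team` is required on all Pods."
        pattern:
          metadata:
            labels:
              team: "?*"   # any non-empty value
```

Because the policy is an ordinary Kubernetes resource, it can be applied with `kubectl apply -f`, versioned in Git, and rolled out through the same pipelines as any other manifest.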

Propelled by strong traction and downloads, Kyverno is well on its way to becoming the de facto policy management engine for Kubernetes.

PMK and Beyond
Enterprises looking for a deployment-ready solution with pre-built policies and governance and compliance features will find the Policy Manager for Kyverno (PMK), currently in beta, a perfect fit for hardening their production Kubernetes.

We’re excited about Nirmata’s roadmap and what lies ahead, and we look forward to working with Jim, Ritesh, Damien, Anubhav and the rest of the Nirmata team as they continue to build game-changing products for production Kubernetes.

Nirmata Raises $4M in Pre-Series A Funding to Capitalize on the Full Potential of Kubernetes Native Policy Management, Kyverno

Launched in early 2021, the open-source project has generated over six million downloads; the new investment will accelerate adoption by supporting the global Kyverno community, establishing a new engineering team in India, and delivering solutions around Kyverno.

Nirmata, the software solutions provider for governance, compliance, security, and automation of production Kubernetes workloads and clusters, and creators of Kyverno, the leading policy engine designed for Kubernetes, today announced it has raised $4.0 million in pre-series A funding to further accelerate the growth of Kyverno. The new investment was led by Z5 Capital with participation from Uncorrelated Ventures, Samsung Next, Benhamou Global Ventures (BGV) and angel investors Saqib Syed and BV Jagadeesh.

This funding builds upon an exceptional year for Nirmata and comes as Kyverno achieved considerable growth punctuated by the increased adoption of open source. Since the beginning of 2021, Kyverno’s adoption quickly soared to over six million downloads, with a growing number of users including Novartis, The New York Times, Duke Energy, TriNet, Grofers and others. It is used by open source projects like Flux, KubeArmor and others. In May 2021, Nirmata Policy Manager for Kyverno (PMK) was launched to streamline the adoption of Kyverno across multiple clusters as well as facilitate Policy-as-Code best practices by enabling the deployment of Kyverno policies across fleets of clusters using GitOps workflows.

With this new investment, the company will scale its product and operations to support the Kyverno community and establish an engineering team in India as well as grow its sales and marketing to accelerate its adoption.

Key Milestones

  • Accepted by the Cloud Native Computing Foundation as a sandbox project in November 2020
  • More than 6 million downloads in less than six months
  • More than 1,200 GitHub stars

As containers are spun up quickly, the growing demands placed upon developers often leave a security gap that exposes potential threats and risks in configuration settings. In 2018 and 2019, breaches caused by cloud misconfigurations exposed nearly 33.4 billion records in total. According to the Ponemon Institute’s 2019 report, the average cost per lost record globally is $150. Multiplied by the number of records exposed, misconfigurations cost companies worldwide nearly $5 trillion in 2018 and 2019 alone. As enterprises accelerate their adoption of cloud technologies, Nirmata’s Kyverno provides an essential method, with native tools and language, to secure containers for enterprises deploying resources in cloud environments.

“Kubernetes gives a lot of flexibility in the way that workloads are deployed. Yet developers may not know 80% of what needs to be configured, nor should they. Kyverno gives users the ability to focus on what matters – their workloads and applications – by aiding the adoption of Kubernetes policies rather than requiring users to learn and adopt new ones,” said Jim Bugwadia, Chief Executive Officer, Nirmata. “We’re at the cutting edge of this innovation and are thrilled to be working with our partners at Z5 Capital, Benhamou Global Ventures, Uncorrelated Ventures, Samsung Next, and angel investors BV Jagadeesh and Saqib Syed to accelerate the execution of our vision.”

Nirmata’s mission is to enable the automated management of cloud native applications in an infrastructure-agnostic manner. To achieve this, policy-based management is critical for achieving autonomy across roles while staying aligned with organizational goals and standards.

Analysts Underscore the Widespread Adoption of Kubernetes
“Our survey research indicates Kubernetes use continues to grow, with more than 20% of enterprise organizations that have deployed applications to production in the last year indicating Kubernetes is fully deployed across all of their IT organization and another 32% reporting some adoption at team level.” — Jay Lyman, Senior Research Analyst, 451 Research

Investors Highlight the Critical Need to Manage Cloud Application Changes Using Security Policies
“As enterprises adopt cloud applications and Kubernetes, applying and managing security policies is becoming increasingly challenging. With Kyverno and Policy Manager for Kyverno (PMK), Nirmata offers a simple and elegant Kubernetes Native Policy management approach to secure cloud applications. We are excited to support Nirmata in its mission to help customers solve their Kubernetes governance, compliance and automation challenges.” — Arun Ramamoorthy, Founding Partner, Z5 Capital

“When certain infrastructure becomes pervasive and dominant like Kubernetes has, critical services need to be native, open source, and standards-based. Policy management is one such critical service, and Kyverno is a beautiful open-source CNCF-endorsed native solution that allows policies to be managed as Kubernetes resources.” — Salil Deshpande, General Partner at Uncorrelated Ventures

“Nirmata is well positioned to emerge as the go-to provider for security, automation and operations of Kubernetes workloads and clusters, facilitating application deployments and management for enterprise technology companies around the world. This presents a natural fit with BGV’s Enterprise 4.0 investment thesis, and the Nirmata Policy Manager for Kyverno has gained remarkable traction in recent months. We’re bullish on this space and very excited at Nirmata’s prospects for accelerated growth in the years ahead.” — Yashwanth Hemaraj, Partner, Benhamou Global Ventures (BGV)

For more information about Nirmata’s Kyverno, please visit http://www.nirmata.com.

About Nirmata, Inc.
Nirmata, the creator of Kyverno, provides open source and commercial enterprise solutions for governance, compliance, security, and automation of production Kubernetes workloads. Nirmata enables self-service cluster provisioning; provides DevOps teams with visibility, health, metrics, and alerts; ensures compliance via workload policies; and streamlines application deployments across Kubernetes clusters deployed on any cloud, data center, or edge. For more information, visit us at https://www.nirmata.com. You can also follow Nirmata on GitHub, Twitter, Facebook, and LinkedIn.

About Z5 Capital
Z5 Capital is an early-stage, enterprise-focused venture capital fund based in Palo Alto that works closely with entrepreneurs to help build standout companies. The Z5 approach involves deep engagement with companies through a combination of partnering, mentoring, and collaborating to help them solve challenges associated with go-to-market and scale. Visit https://z5capital.com/ to learn more.

About Samsung Next
Samsung Next is an investment group that champions bold and ambitious founders. Next helps Samsung shape the future by identifying the technologies, trends, and ideas that matter. The team focuses broadly on the technology areas of AI, blockchain, fintech, healthtech, infrastructure, and mediatech, but invests opportunistically in founders pursuing the imagined and impossible. Visit https://www.samsungnext.com/ to learn more.

About Uncorrelated Ventures
Bain-backed Uncorrelated Ventures was founded by Salil Deshpande to focus on open source and infrastructure software, both traditional and decentralized. Over 14 years as general partner and managing director at Bay Partners and Bain Capital, Salil invested $350M+ into 50+ companies early, including MuleSoft, DynaTrace, Buddy Media, SpringSource, Redis Labs, Jambool, Dropcam, Tealium, Sonatype, Frame, DataStax, Netdata, Quantum Metric, Philz Coffee, Upgrade and DeFi projects Compound and Maker. Salil was on the Forbes Midas List of the 100 best-performing venture investors worldwide in 2013, 2014, 2015, 2016, 2017, 2018, and 2019. Visit https://uncorrelated.com/ to learn more.

About BGV
BGV is a venture capital firm with deep Silicon Valley roots and an exclusive focus on global Enterprise 4.0 technology innovation. We source companies from innovation hubs around the world and deploy our financial and human capital from seed stage to IPO. Founded by Eric Benhamou, former chairman and CEO of 3Com, Palm and co-founder of Bridge Communications, BGV is comprised of global operating executives and investors, and is often the first and most active institutional investor in our portfolio companies. Our management team leverages deep operational expertise and an extensive network of technical advisors, executives and functional experts to actively engage and support our start-up entrepreneurs. With offices in Palo Alto, Tel Aviv and Paris, BGV has championed a cross-border venture investing model with a portfolio representing businesses in the US, Israel, Europe and India. Visit http://www.bgv.vc to learn more.

ArmorCode Emerges From Stealth With $3 Million in Seed Funding to Redefine Application Security

Intelligent Application Security Platform Offers Consolidated Application Security Posture Management, DevSecOps Orchestration, and Continuous Compliance

Palo Alto, Calif. – May 13, 2021 – ArmorCode, the Silicon Valley startup delivering application security at the speed of DevOps, today announced it has secured $3 million in seed financing led by Sierra Ventures with participation from Tau Ventures and Z5 Capital, and individual investors including industry leaders Andreas Kuehlmann (CEO, Tortuga Logic; former security executive at Synopsys) and Prithvi Rai (former Sr. Director of Security at Uber, Facebook, and Yahoo!). Enterprises use ArmorCode to consolidate application security tooling, streamline application security processes, increase business agility, and improve developer productivity.

“Application development has changed radically: from waterfall to agile development and from monolithic application architecture to microservices delivered at the edge. Once-a-year compliance is no longer sufficient as releases are done on a weekly or even daily cadence. However, application security and compliance tools haven’t kept up,” said Teza Mukkavilli, Chief Security Officer at ChargePoint, an ArmorCode customer. “My team was able to onboard the ArmorCode platform in less than 15 minutes and saw tremendous time to value.”

ArmorCode was founded in July 2020 by CEO Nikhil Gupta — the former VMware and Cisco executive best known for founding Avid Secure, an AI-powered enterprise cloud security posture management company acquired by Sophos — and seasoned CTO Anant Misra to help companies take charge of increasingly complicated application security environments. According to Gartner, application security is one of the top three fastest-growing segments within cybersecurity.

“We have received consistent feedback from the CISO group of our CXO Advisory Board that they are overwhelmed by the volume and complexity of application security alerts,” said Mark Fernandes, Managing Partner, Sierra Ventures, who has invested in many successful security companies like Sourcefire and RedLock. “ArmorCode is the most comprehensive solution in the space and the founding team has very relevant startup experience to tackle this significant problem. The rapid early bottoms-up customer adoption is validating our thesis.”

The transition to agile development, the rise of microservices, and an increased reliance on cloud services for business operations due to the pandemic have contributed to an explosion in software development and a dramatic reduction in software delivery time. As the speed and complexity of application development skyrockets, application security professionals increasingly find themselves unable to keep up — and many are forced to piece together security tools as stopgaps. Gartner recently found that “78% of CISOs have 16 or more tools in their cybersecurity vendor portfolio and 12% have 46 or more. Too many security vendors result in complex security operations and increased security headcount. Most organizations recognize vendor consolidation as an avenue for reduced costs and better security, with 80% of organizations interested in vendor consolidation strategy.”

In addition, cybercrime is increasing partly due to the pandemic: a global survey of 1,000 CXOs revealed that 90% experienced an increase in cyberattacks due to the pandemic and 93% said they were forced to delay key security projects in order to manage the transition to remote work. Cybersecurity Ventures predicts cybercrime damages will total $6 trillion globally in 2021 — or $190,000 every second — and will grow by 15 percent per year over the next five years, reaching $10.5 trillion annually by 2025.

ArmorCode is a next-generation application security platform that consolidates three key AppSec needs into a single intelligent platform that minimizes tooling and alerts while maximizing agility, efficiency, and cost-effectiveness. The ArmorCode platform includes:

Application Security Posture Management

  • Simplifies AppSec operations by providing a centralized view of all security findings across application and infrastructure security and enables a streamlined CI/CD pipeline
  • Reduces the risk of security incidents by as much as 50% by normalizing, prioritizing, and correlating findings across various AppSec and infrastructure security tools

DevSecOps Orchestration

  • Offers a seamless DevSecOps workflow that fosters tighter collaboration between developers and AppSec engineers with 60+ integrations across leading AppSec, CI/CD, collaboration, and infrastructure security tools

Continuous Compliance

  • Out-of-the-box support for industry standards including SOC 2, GDPR, FedRAMP, and the OWASP Top 10, among others
  • Continuous evaluation of application security controls and relevant security standards

“While software development releases have shrunk from years to hours, enterprise application security processes are still slow, antiquated, and chaotic. ArmorCode has designed a massively scalable agentless platform from the ground up to help modernize application security,” said Nikhil Gupta, Co-founder and CEO of ArmorCode. “We already have a strong remote-first team of more than 25 members and this initial funding will enable us to realize our dream of democratizing application security.”

About ArmorCode

ArmorCode is delivering application security at the speed of DevOps. Founded in 2020 in Palo Alto, California, the company offers security professionals a centralized platform for Application Security Posture Management, DevSecOps Orchestration, and Continuous Compliance. With ArmorCode, enterprises can radically simplify and accelerate application security while cutting costs by as much as 50%. ArmorCode is used by global brands and backed by leading VC firms and security industry experts. To learn more, please visit armorcode.com.

Media Contact:
Aaron Endre
aaron@aaronendre.com

AccuKnox Secures $4.6M in Seed Funding to Meet Growing Demand for Zero-Trust Kubernetes Security Solutions

National Grid Partners leads seed investment in Zero Trust Cloud Security innovator AccuKnox, with strategic investment from SRI, Z5Capital and Outliers VC.

MENLO PARK, April 27, 2021 – AccuKnox, formed in partnership with SRI (Stanford Research Institute), today announced that it has closed an over-subscribed $4.6 million seed financing led by National Grid Partners, with strategic participation from SRI, Z5Capital and Outliers VC. The funding will be used to expand the engineering team and enable AccuKnox to further capitalize on its technology innovations in security and Zero Trust in dynamic, cloud-native environments. The company plans to deliver open source and commercial offerings before the end of this year.

In addition, AccuKnox announced that Phil Porras, Program Director and Internet Security Group Leader, Computer Science Laboratory, SRI, has joined the company, assuming the role of Chief Scientist. The founding team includes Asif Ali, Nat Natraj, Rahul Jadhav and Phil Porras.

AccuKnox is a Zero Trust run-time Kubernetes security platform that leverages an identity-driven approach. Kubernetes, one of the fastest-growing open source projects, is the foundation of cloud-native applications. AccuKnox is the founding team behind KubeArmor, an open source run-time security enforcement system that leverages Linux Security Modules (LSM). The company’s technology is anchored on seminal patented innovations in container security, unsupervised learning, and data provenance developed at Stanford Research Institute.

AccuKnox leverages best-in-class foundational open source platforms like eBPF, SPIFFE, OPA, and Kyverno, and provides a comprehensive security, compliance, and governance platform for public and private clouds.

“The AccuKnox team under Nat Natraj’s leadership is made up of proven, cloud-native DevSecOps professionals,” said Lisa Lambert, Chief Technology and Innovation Officer at National Grid and Founder and President of National Grid Partners. “We’re confident that the combination of SRI innovations and AccuKnox’s proven team will deliver a category leading, Zero Trust security platform to address emerging threats.”

National Grid Partners Director Raghu Madabushi will join the AccuKnox board.

Gartner projects that by 2022, more than 75% of global organizations will be running containerized applications in production. Due to this unprecedented growth in Private and Public Cloud container deployments, industry analysts forecast the container security market to reach $2.25 billion by 2023.

“We are thrilled to team with National Grid and SRI to launch AccuKnox. I am equally excited that top caliber Cloud Native tech leaders like Asif Ali and Rahul Jadhav, and Phil Porras, cybersecurity industry luminary, are a part of our amazing founding team. As organizations embrace Kubernetes as a foundational aspect of their digital transformation efforts, run-time security, governance and compliance are strategic imperatives. AccuKnox is uniquely poised to deliver a compelling platform,” said Nat Natraj, co-founder, CEO, AccuKnox. “This is a massive market opportunity for AccuKnox, and we look forward to working in close concert with our new investors to deliver run-time Kubernetes and data security in a DevSecOps model.”

SRI’s pioneering R&D contributions (including the computer mouse, SIRI, Robotic Surgery and Intrusion Detection) are foundational to modern society. “Our nonprofit mission is to use technology to make the world safer, healthier and more productive,” said Dr. Manish Kothari, President of SRI International. “Combining our cutting-edge research with a proven team at AccuKnox allows us to do just that. We are thrilled to be partnering with Nat Natraj and the AccuKnox team.”

AccuKnox is working with security leaders and is targeting GA (General Availability) of its platform in Q4 2021.

  • “Container usage for production deployments in enterprises is still constrained by concerns regarding security, monitoring, data management and networking.” — Gartner, Best Practices for Running Containers and Kubernetes in Production, August 4, 2020.
  • “Container adoption is increasing, and security must come along for the ride. Organizations value the scalability and agility that containers offer, but containers introduce new security challenges that can’t be addressed with traditional security and networking tools. Commonly accepted security tools like vulnerability scanners, network forensics, and endpoint detection and response (EDR) are too heavyweight for a container environment. Security pros need cloud native tools that are purpose-built for high scale, lightweight, ephemeral container environments.” — Best Practices For Container Security, Forrester Research, July 24, 2020.
  • “AccuKnox’s foundational innovations in the areas of container security, un-supervised Learning and data provenance are precisely what is needed for delivering a comprehensive and robust cloud native Zero-Trust security platform.” — Chase Cunningham, Renowned CyberSecurity Analyst and Zero-Trust expert.

About AccuKnox

AccuKnox provides a Zero Trust Run-time Kubernetes Security platform. AccuKnox is built in partnership with SRI (Stanford Research Institute) and is anchored on seminal inventions in the areas of: Container Security, Anomaly Detection and Data Provenance. AccuKnox can be deployed in Public and Private Cloud environments. Visit www.accuknox.com or follow us on Twitter (@accuknox).

About National Grid Partners

National Grid Partners (NGP) is the venture investment and innovation arm of National Grid plc, one of the largest investor-owned energy companies in the world. NGP invests for strategic and financial impact and leads company-wide disruptive innovation efforts. The organization provides a multi-functional approach to building startups, including innovation (new business creation), incubation, corporate venture capital, business development and culture acceleration. NGP is headquartered in Silicon Valley and has offices in Boston, London, and New York. Visit ngpartners.com or follow us on Twitter (@ngpartners_).

Contact:

Nat Natraj, co-founder, CEO

n@accuknox.com

@N_SiliconValley

Prescient Devices secures $2M funding for low-code IoT development software

Prescient Devices, a platform for internet of things (IoT) software and service development, today announced that it raised $2 million in seed funding. The company says it’ll put the proceeds toward product ideation and ramping up its sales and marketing programs.

Global IoT revenue hit an estimated $1.7 trillion in 2019, when the number of edge devices connected to the internet exceeded 23 billion, according to CB Insights. But despite the industry’s growth, not all organizations think they’re ready for it. In a recent Kaspersky Lab survey, 54% of respondents said the risks associated with connectivity and integration of IoT ecosystems remained a significant blocker.

Prescient offers a low-code programmable platform that allows system integrators, IT engineers, and data scientists to build IoT and edge computing solutions. The platform, which can deploy firmware to fleets of IoT devices, delivers templates that connect sensors to the cloud, enabling remote monitoring and industrial automation. Prescient customers gain access to drag-and-drop graphical programming interfaces, modules, and recipes that they can use to program edge devices, edge and cloud dashboards, and cloud functions. They’re also provided a library of reference solutions for popular sensors and devices.

There’s an abundance of tools promising to simplify IoT development and management at the edge, including Google’s Cloud IoT Edge, Amazon’s AWS IoT, Microsoft’s Azure Sphere, and Baidu’s OpenEdge, as well as offerings from Zededa, Particle, and Balena. But CEO Andy Wang asserts that Prescient has an advantage in the scalability of its approach.

“We uniquely focus on removing the technology barrier for engineers, integrators, and data scientists to build, and accelerate IoT applications, helping deliver new business applications to the commercial market. The growing interest and active engagement from our users have been amazing,” Wang said in a press release.

Pandemic-fueled growth
In what’s been a boon for Prescient, the pandemic has contributed to the growth of the larger IoT market. Microsoft’s 2020 IoT Signals report indicates that 33% of decision makers plan to up their IoT investments, while 41% say their existing investments will remain the same. Meanwhile, a recent Deloitte survey found that respondents believe IoT will have the largest impact on their organizations compared with AI and cloud infrastructure.

“Our growing community has already developed active IoT applications for predictive maintenance, machine vision, and test automation within weeks of concept, transforming the entire approach to IoT business automation and edge intelligence applications,” Wang continued. “This round of funding will help accelerate our ability to better support our customers while expanding [the Prescient platform’s] functionality.”

Z5 Capital led Boston, Massachusetts-based Prescient’s latest funding round, which had participation from angel investors at MIT and the Harvard Business School.