The ZKML Workflow
What's different about zero-knowledge machine learning projects? How do you limit risks and build a good solution?
In conventional software development, the process begins with addressing the question:
"What is your implementation goal?"
Following that, you proceed with the actual implementation.
With ZK coprocessors, such use cases become very straightforward: you write the implementation logic, make sure there are no bugs in the circuit, generate proofs, and verify them on the blockchain.
However, in machine learning solutions, the initial step involves exploring what's possible within your existing data. Thus, the primary question becomes:
"What is feasible for you to implement?"
| Aspect | Conventional Software | Machine Learning |
| --- | --- | --- |
| Risk | Will users love it? | Is it possible? |
| Methodology | Define | Explore |
| North Star | User Stories | Test Dataset |
| Quality Target | As good as possible | Good enough |
Here is what I have learned over years of working on machine learning projects across multiple companies, and how I apply it now to ZKML.
Define the task
It's easy to get caught up in AI projects that end up going nowhere. A well-defined machine learning project greatly lowers this risk.
Here are the essential questions you need to answer to outline a project.
Comprehend the Existing Process
What does your present process entail? Your machine learning solution will be taking the place of an existing process. How are decisions currently made in this process? Understanding the existing process will provide valuable domain knowledge and assist in shaping your machine learning system.
Specify Your Prediction Goal
What specific variable are you aiming to predict? Detail the expected output of your machine learning system as precisely as possible.
Identify Relevant Data Sources
What available data can effectively predict this output? Begin by identifying data sources that the current process depends on. A method to pinpoint relevant data sources is to ask yourself: “If I were to make this prediction manually, what information would I need?”
This is especially challenging in the current state of building machine learning solutions for blockchains and protocols. Blockchains are extremely rich in data, but the resources available for developers to access this data without operating their own nodes are scattered and vary greatly in quality. Lately, excellent tools such as Cryo have emerged, yet the data they provide is not immediately suitable for machine learning applications and requires further processing.
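As a minimal sketch of that further processing, assume Cryo has already exported transaction data to parquet (the export command, file name, and column names below are all assumptions for illustration):

```python
import pandas as pd

# Assumes Cryo has already exported transactions to parquet, e.g. with
# something like: cryo transactions --blocks 18000000:18010000 --rpc $RPC_URL
# The file name and column names below are illustrative.
txs = pd.read_parquet("transactions__18000000_to_18010000.parquet")

# Aggregate raw transactions into per-address features a model can consume.
features = (
    txs.groupby("from_address")
       .agg(
           tx_count=("transaction_hash", "count"),
           total_value=("value", "sum"),
           avg_gas_price=("gas_price", "mean"),
       )
       .reset_index()
)
print(features.head())
```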
By grasping the current process, clarifying your prediction objective, and pinpointing all pertinent data sources, you position yourself well to assess whether moving to the next phase is a logical step.
Make sure it works
Even with a well-defined problem, it's impossible to predict the final accuracy of your machine learning model, or whether it will be worthwhile to replace the existing process.
Conducting a proof of concept is the most cost-effective method to determine the potential return on investment (ROI) of your final solution. Here are the steps involved.
Research
Study how other teams have tackled similar challenges, with or without machine learning. Use your findings and the knowledge from the current process you aim to replace to create a strategy.
Create a Dataset
The cornerstone of any machine learning project is a representative dataset. This dataset should contain real-life examples of the scenarios you want your machine learning system to accurately predict. Picture it as a spreadsheet that includes:
- A row for each instance,
- Several columns with relevant input data,
- A column for the output (or the target).
The goal is for the model to learn to predict the output based on the input. For instance, predicting a customer's credit rating (output) from their payment history (input).
This dataset acts like the requirements document in traditional software projects, serving as the benchmark to gauge your progress.
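As a toy illustration of that spreadsheet, here is the credit-rating example as a tiny pandas DataFrame (the feature names and values are fabricated purely for illustration):

```python
import pandas as pd

# One row per customer, input columns with payment history, and a target.
dataset = pd.DataFrame({
    "on_time_payments": [24, 3, 18, 30],          # input
    "late_payments":    [0, 7, 2, 1],             # input
    "avg_days_overdue": [0.0, 21.5, 4.0, 1.2],    # input
    "credit_rating":    ["A", "C", "B", "A"],     # output / target
})

X = dataset.drop(columns=["credit_rating"])  # what the model sees
y = dataset["credit_rating"]                 # what the model must predict
```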
Understand your problem
When building ZKML solutions, you need a complete understanding of the problem and its requirements, such as:
- What type of protocol you are building for.
- The latency requirements, measured in blocks.
- The value and cost of the ZKML solution compared to the existing process.
- How the ZKML solution will be integrated into the protocol.
Experiment
Begin with the most promising method, assess its performance, and then iterate to enhance it. Continue this process until you identify an approach that is good enough to meet your criteria.
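A minimal experiment loop might look like the sketch below, assuming scikit-learn and the `X`, `y` pair from the dataset sketch above; the candidate models are placeholders for whatever your research suggested:

```python
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

# Evaluate each candidate against the same benchmark dataset, then iterate.
candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200),
}

for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```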
The journey to production
While a proof-of-concept won't generate revenue, the following steps will guide you towards a stable, large-scale solution.
"Working software is the primary measure of progress" — The Agile Manifesto.
Enhance Accuracy
Initially, a proof-of-concept is a basic 20/80 implementation (20% of the effort for 80% of the value). Now, focus on the vital improvements omitted in the first round:
- Incorporate additional data.
- Develop new features.
- Experiment with different algorithms.
- Optimize the model parameters.
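For the last point, a small hyperparameter search is often the easiest win. Here is a sketch with scikit-learn's GridSearchCV; the model and grid are illustrative:

```python
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestClassifier

# Search a small, illustrative grid; widen it as your compute budget allows.
param_grid = {
    "n_estimators": [100, 300, 500],
    "max_depth": [4, 8, None],
}
search = GridSearchCV(RandomForestClassifier(), param_grid, cv=5, scoring="accuracy")
search.fit(X, y)

print("best params:", search.best_params_)
print("best CV accuracy:", round(search.best_score_, 3))
```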
Transition to ZKML
You usually convert your model into a verifiable model using a transpiler. Make sure you perform enough quality checks to ensure the model behaves the same after conversion, as multiple optimizations can happen during this process: quantization, conversion from floating point to fixed point, and other ZK-backend-specific changes are usually performed.
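One simple quality check is to run the original and the converted model on the same inputs and bound the drift. In the sketch below, `zk_model` is a placeholder for whatever your transpiler produces, and its API is an assumption; the check itself is the point:

```python
import numpy as np

# Run both models on the same test inputs and bound the numeric drift
# introduced by quantization and fixed-point conversion.
y_original = model.predict_proba(X)      # floating-point model
y_converted = zk_model.predict_proba(X)  # transpiled model (placeholder API)

max_drift = np.max(np.abs(y_original - y_converted))
print(f"max output drift after conversion: {max_drift:.6f}")
assert max_drift < 1e-3, "conversion changed model behaviour beyond tolerance"

# For classifiers, also confirm the predicted labels agree on every instance.
assert (y_original.argmax(axis=1) == y_converted.argmax(axis=1)).all()
```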
The "good enough" principle is especially relevant here, as you want to balance the cost and performance of running a ZKML model. A bidirectional LSTM might be ideal for the forecasting problem you are trying to solve, but an XGBoost model may achieve almost the same metrics with significantly lower computational requirements.
Perform benchmarks and integration tests to make sure the whole ZKML solution meets the use case's criteria for cost, performance, and latency before scaling and integrating it.
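A rough benchmark harness might time inference and proving separately, so both can be compared against the block-latency budget identified earlier; `zk_model.prove` is a placeholder for your proving stack's actual call:

```python
import time

t0 = time.perf_counter()
prediction = zk_model.predict(X[:1])   # plain inference
t1 = time.perf_counter()
proof = zk_model.prove(X[:1])          # proof generation (placeholder call)
t2 = time.perf_counter()

print(f"inference: {t1 - t0:.3f}s, proving: {t2 - t1:.3f}s")
# Compare the total against the latency budget, e.g. 2 blocks ≈ 24 s on Ethereum.
```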
Scaling
Transitioning from a proof-of-concept script to a production-grade solution is a significant leap.
- Scalability & Stability: Transform data processing steps into distinct, scalable components within a data pipeline.
- Testing: Implement comprehensive unit and integration tests, including scenarios for potential data errors (see the sketch after this list).
- Deployment: Establish a robust, automated deployment process that can handle the necessary throughput and processing speed, including automated infrastructure setup covering the inference endpoint, the ZK prover, and the verifier integration.
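As a sketch of the testing point, here is a pytest-style test that feeds a pipeline component deliberately malformed data; the component and column names are illustrative:

```python
import math
import pandas as pd

def build_features(txs: pd.DataFrame) -> pd.DataFrame:
    """Illustrative pipeline component: raw transactions -> features."""
    txs = txs.copy()
    txs["value"] = txs["value"].fillna(0)  # guard against missing data
    return (
        txs.groupby("from_address")
           .agg(total_value=("value", "sum"))
           .reset_index()
    )

def test_build_features_handles_missing_values():
    # Deliberately malformed input: a missing entry in the `value` column.
    txs = pd.DataFrame({
        "from_address": ["0xabc", "0xabc"],
        "value": [1.0, None],
    })
    out = build_features(txs)
    assert not out["total_value"].isna().any()
    assert math.isclose(out["total_value"].iloc[0], 1.0)
```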
Conduct A/B Testing
Like other software upgrades, the ultimate test for your automated process is to compare it with the existing system. An A/B test allows you to evaluate the improvements and ROI of your project.
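A toy comparison might look like this, assuming you have logged an outcome metric per instance for the existing process (A) and the ML-backed process (B); the numbers are fabricated:

```python
from scipy.stats import mannwhitneyu

# Outcome metric per instance for each variant; replace with real logs.
outcomes_a = [0.61, 0.58, 0.64, 0.60, 0.59, 0.63]  # existing process
outcomes_b = [0.66, 0.69, 0.64, 0.70, 0.67, 0.68]  # ML-backed process

stat, p_value = mannwhitneyu(outcomes_b, outcomes_a, alternative="greater")
print(f"p-value that B outperforms A: {p_value:.4f}")
```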
Implement an API
Your machine learning service requires a method to communicate with the rest of your infrastructure, either by consistently updating a database or through an API.
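A minimal API sketch with FastAPI, reusing the credit-rating example; the feature names are illustrative, and `model` is assumed to be loaded at startup:

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class PredictionRequest(BaseModel):
    # Illustrative features; match them to your own dataset.
    on_time_payments: int
    late_payments: int
    avg_days_overdue: float

@app.post("/predict")
def predict(req: PredictionRequest) -> dict:
    features = [[req.on_time_payments, req.late_payments, req.avg_days_overdue]]
    prediction = model.predict(features)[0]  # `model` loaded elsewhere at startup
    return {"credit_rating": str(prediction)}
```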
Verifier integration
In ZKML, the verifier plays a crucial role in ensuring the integrity and validity of the machine learning model's outputs, allowing trustless use of inferences on the blockchain. This integration is critical for modifying the state of the smart contracts where the result will be used to replace the current process.
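Here is a sketch of checking a proof against an on-chain verifier with web3.py before the result is consumed; the contract address, ABI, and `verifyProof` function name are placeholders for your verifier's actual interface:

```python
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://eth-mainnet.example/rpc"))

verifier = w3.eth.contract(
    address="0x0000000000000000000000000000000000000000",  # placeholder address
    abi=VERIFIER_ABI,  # ABI generated when the verifier contract was deployed
)

# `proof` and `public_inputs` come from your proving pipeline.
is_valid = verifier.functions.verifyProof(proof, public_inputs).call()
assert is_valid, "proof rejected: the inference cannot be trusted on-chain"
```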
Document Everything
Beyond code documentation, consider creating a user guide explaining how the solution works. It's essential to articulate the logic behind the implementation, as in data science the rationale may not be evident from the code alone.
Consider Optional Enhancements
- Version Control: Useful for A/B testing against older models or swiftly reverting to previous pipeline versions.
- Automated Retraining: Models become outdated; automating the update process with new data can be beneficial in some scenarios.
If you need help with a zero-knowledge machine learning problem, get in touch with me: fran@gizatech.xyz