
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Tue, 14 Apr 2026 23:03:15 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Explore your Cloudflare data with Python notebooks, powered by marimo]]></title>
            <link>https://blog.cloudflare.com/marimo-cloudflare-notebooks/</link>
            <pubDate>Wed, 16 Jul 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ We’ve partnered with marimo to bring their best-in-class Python notebook experience to your Cloudflare data. ]]></description>
<content:encoded><![CDATA[ <p>Many developers, data scientists, and researchers do much of their work in Python notebooks: they’ve been the de facto standard for data science and sharing for well over a decade. Notebooks are popular because they make it easy to code, explore data, prototype ideas, and share results. We use them heavily at Cloudflare, and we’re seeing more and more developers use notebooks to work with data – from analyzing trends in HTTP traffic and querying <a href="https://developers.cloudflare.com/analytics/analytics-engine/"><u>Workers Analytics Engine</u></a>, to querying their own <a href="https://blog.cloudflare.com/r2-data-catalog-public-beta/"><u>Iceberg tables stored in R2</u></a>.</p><p>Traditional notebooks are incredibly powerful — but they were not built with collaboration, reproducibility, or deployment as data apps in mind. As usage grows across teams and workflows, these limitations collide with the reality of work at scale.</p><p><a href="https://marimo.io/"><b><u>marimo</u></b></a> reimagines the notebook experience with these <a href="https://marimo.io/blog/lessons-learned"><u>challenges in mind</u></a>. It’s an <a href="https://github.com/marimo-team/marimo"><u>open-source</u></a> reactive Python notebook that’s built to be reproducible, easy to track in Git, executable as a standalone script, and deployable. We have partnered with the marimo team to bring this streamlined, production-friendly experience to Cloudflare developers. 
Spend less time wrestling with tools and more time exploring your data.</p><p>Today, we’re excited to announce three things:</p><ul><li><p><a href="https://notebooks.cloudflare.com/html-wasm/_start"><u>Cloudflare auth built into marimo notebooks</u></a> – Sign in with your Cloudflare account directly from a notebook and use Cloudflare APIs without needing to create API tokens</p></li><li><p><a href="https://github.com/cloudflare/notebook-examples"><u>Open-source notebook examples</u></a> – Explore your Cloudflare data with ready-to-run notebook examples for services like <a href="https://developers.cloudflare.com/r2/"><u>R2</u></a>, <a href="https://developers.cloudflare.com/workers-ai/"><u>Workers AI</u></a>, <a href="https://developers.cloudflare.com/d1/"><u>D1</u></a>, and more</p></li><li><p><a href="https://github.com/cloudflare/containers-demos"><u>Run marimo on Cloudflare Containers</u></a> – Easily deploy marimo notebooks to Cloudflare Containers for scalable, long-running data workflows</p></li></ul><p>Want to start exploring your Cloudflare data with marimo right now? Head over to <a href="http://notebooks.cloudflare.com"><u>notebooks.cloudflare.com</u></a>. Or, keep reading to learn more about marimo, how we’ve made authentication easy from within notebooks, and how you can use marimo to explore and share notebooks and apps on Cloudflare.</p>
    <div>
      <h3>Why marimo?</h3>
      <a href="#why-marimo">
        
      </a>
    </div>
    <p>marimo is an <a href="https://docs.marimo.io/"><u>open-source</u></a> reactive Python notebook designed specifically for working with data, built from the ground up to solve many problems with traditional notebooks.</p><p>The core feature that sets marimo apart from traditional notebooks is its <a href="https://marimo.io/blog/lessons-learned"><u>reactive execution model</u></a>, powered by a statically inferred dataflow graph on cells. Run a cell or interact with a <a href="https://docs.marimo.io/guides/interactivity/"><u>UI element</u></a>, and marimo either runs dependent cells or marks them as stale (your choice). This keeps code and outputs consistent, prevents bugs before they happen, and dramatically increases the speed at which you can experiment with data. </p><p>Thanks to reactive execution, notebooks are also deployable as data applications, making them easy to share. While you can run marimo notebooks locally, on cloud servers, GPUs — anywhere you can traditionally run software — you can also run them entirely in the browser <a href="https://docs.marimo.io/guides/wasm/"><u>with WebAssembly</u></a>, bringing the cost of sharing down to zero.</p><p>Because marimo notebooks are stored as Python, they <a href="https://marimo.io/blog/python-not-json"><u>enjoy all the benefits of software</u></a>: version with Git, execute as a script or pipeline, test with pytest, inline package requirements with uv, and import symbols from your notebook into other Python modules. Though stored as Python, marimo also <a href="https://docs.marimo.io/guides/working_with_data/sql/"><u>supports SQL</u></a> and data sources like DuckDB, Postgres, and Iceberg-based data catalogs (which marimo's <a href="https://docs.marimo.io/guides/generate_with_ai/"><u>AI assistant</u></a> can access, in addition to data in RAM).</p><p>To get an idea of what a marimo notebook is like, check out the embedded example notebook below:</p><div>
   <div>
       
   </div>
</div>
<p></p>
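<p>To make the reactive model concrete, here is a toy sketch (not marimo’s actual implementation) of how a dataflow graph can be statically inferred from cells: a cell depends on every cell that defines a variable it reads, and a topological order of that graph gives a consistent execution sequence. The helper names are illustrative.</p>

```python
import ast
from graphlib import TopologicalSorter

def cell_graph(cells: dict[str, str]) -> dict[str, set[str]]:
    """Infer, for each cell, the set of cells it depends on.

    A cell depends on another cell when it reads a variable the other
    cell assigns -- very loosely mirroring how a reactive notebook
    wires cells into a dataflow graph.
    """
    defs: dict[str, str] = {}  # variable name -> cell that assigns it
    for name, src in cells.items():
        for node in ast.walk(ast.parse(src)):
            if isinstance(node, ast.Name) and isinstance(node.ctx, ast.Store):
                defs[node.id] = name

    deps: dict[str, set[str]] = {name: set() for name in cells}
    for name, src in cells.items():
        for node in ast.walk(ast.parse(src)):
            if isinstance(node, ast.Name) and isinstance(node.ctx, ast.Load):
                owner = defs.get(node.id)
                if owner is not None and owner != name:
                    deps[name].add(owner)
    return deps

def run_order(cells: dict[str, str]) -> list[str]:
    # Dependencies first: the order in which cells must (re)run
    return list(TopologicalSorter(cell_graph(cells)).static_order())
```

<p>With cells <code>x = 1</code>, <code>y = x + 1</code>, and <code>z = y * 2</code>, <code>run_order</code> always schedules them in dependency order, which is why editing the first cell can cleanly re-run the others.</p>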
    <div>
      <h3>Exploring your Cloudflare data with marimo</h3>
      <a href="#exploring-your-cloudflare-data-with-marimo">
        
      </a>
    </div>
    <p>Ready to explore your own Cloudflare data in a marimo notebook? The easiest way to begin is to visit <a href="http://notebooks.cloudflare.com"><u>notebooks.cloudflare.com</u></a> and run one of our example notebooks directly in your browser via <a href="https://webassembly.org/"><u>WebAssembly (Wasm)</u></a>. You can also browse the source in our <a href="https://github.com/cloudflare/notebook-examples"><u>notebook examples GitHub repo</u></a>.</p><p>Want to create your own notebook to run locally instead? Here’s a quick example that shows you how to authenticate with your Cloudflare account and list the zones you have access to:</p><ol><li><p>Install <a href="https://docs.astral.sh/uv/"><u>uv</u></a> if you haven’t already by following the <a href="https://docs.astral.sh/uv/getting-started/installation/"><u>installation guide</u></a>.</p></li><li><p>Create a new project directory for your notebook:</p></li></ol>
            <pre><code>mkdir cloudflare-zones-notebook
cd cloudflare-zones-notebook</code></pre>
            <p>3. Initialize a new uv project (this creates a <code>.venv</code> and a <code>pyproject.toml</code>):</p>
            <pre><code>uv init</code></pre>
            <p>4. Add marimo and required dependencies:</p>
            <pre><code>uv add marimo</code></pre>
            <p>5. Create a file called <code>list-zones.py</code> and paste in the following notebook:</p>
            <pre><code>import marimo

__generated_with = "0.14.10"
app = marimo.App(width="full", auto_download=["ipynb", "html"])


@app.cell
def _():
    from moutils.oauth import PKCEFlow
    import requests

    # Start OAuth PKCE flow to authenticate with Cloudflare
    auth = PKCEFlow(provider="cloudflare")

    # Renders login UI in notebook
    auth
    return (auth,)


@app.cell
def _(auth):
    import marimo as mo
    from cloudflare import Cloudflare

    mo.stop(not auth.access_token, mo.md("Please **sign in** using the button above."))
    client = Cloudflare(api_token=auth.access_token)

    zones = client.zones.list()
    [zone.name for zone in zones.result]
    return


if __name__ == "__main__":
    app.run()</code></pre>
            <p>6. Open the notebook editor:</p>
            <pre><code>uv run marimo edit list-zones.py --sandbox</code></pre>
            <p>7. Log in via the OAuth prompt in the notebook. Once authenticated, you’ll see a list of your Cloudflare zones in the final cell.</p><p>That’s it! From here, you can expand the notebook to call <a href="https://developers.cloudflare.com/workers-ai/"><u>Workers AI</u></a> models, query Iceberg tables in <a href="https://developers.cloudflare.com/r2/data-catalog/"><u>R2 Data Catalog</u></a>, or interact with any Cloudflare API.</p>
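<p>As an aside, the <code>client.zones.list()</code> call in the notebook corresponds to a plain REST request against the Cloudflare v4 API. Here is a hedged sketch using <code>requests</code> directly (the <code>zone_names</code> helper is illustrative, not part of any SDK):</p>

```python
import requests

API_BASE = "https://api.cloudflare.com/client/v4"

def zone_names(payload: dict) -> list[str]:
    # A /zones response carries the zone objects under "result"
    return [zone["name"] for zone in payload.get("result", [])]

def list_zones(api_token: str) -> list[str]:
    # Pass the OAuth access token as a Bearer credential
    resp = requests.get(
        f"{API_BASE}/zones",
        headers={"Authorization": f"Bearer {api_token}"},
        timeout=10,
    )
    resp.raise_for_status()
    return zone_names(resp.json())
```

<p>Any other Cloudflare endpoint can be called the same way, with the token from the login widget standing in for a manually created API token.</p>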
    <div>
      <h3>How OAuth works in notebooks</h3>
      <a href="#how-oauth-works-in-notebooks">
        
      </a>
    </div>
    <p>Think of OAuth like a secure handshake between your notebook and Cloudflare. Instead of copying and pasting API tokens, you just click “Sign in with Cloudflare” and the notebook handles the rest.</p><p>We built this experience using PKCE (Proof Key for Code Exchange), a secure OAuth 2.0 flow that avoids client secrets and protects against code interception attacks. PKCE works by generating a one-time code that’s exchanged for a token after login, without ever sharing a client secret. <a href="https://auth0.com/docs/get-started/authentication-and-authorization-flow/authorization-code-flow-with-pkce"><u>Learn more about how PKCE works</u></a>.</p><p>The login widget lives in <a href="https://github.com/marimo-team/moutils/blob/main/notebooks/pkceflow_login.py"><u>moutils.oauth</u></a>, a collaboration between Cloudflare and marimo to make OAuth authentication simple and secure in notebooks. To use it, just create a cell like this:</p>
            <pre><code>auth = PKCEFlow(provider="cloudflare")

# Renders login UI in notebook
auth</code></pre>
            <p>When you run the cell, you’ll see a Sign in with Cloudflare button:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2r3Dmuwcm4AZrhV39Gkhyl/c3f98a3780bc29f1c01ea945621fc005/image2.png" />
          </figure><p>Once logged in, you’ll have a read-only access token you can pass when using the Cloudflare API.</p>
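<p>For the curious, the verifier/challenge pair at the heart of PKCE is simple to construct. This is an illustrative sketch of the S256 method from RFC 7636, not the actual <code>moutils</code> implementation:</p>

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    # Code verifier: a high-entropy random string, base64url without padding
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    # Code challenge: SHA-256 of the verifier, base64url without padding
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge
```

<p>The client sends the challenge when it opens the login page and the verifier when it exchanges the one-time code for a token, so an attacker who intercepts the code alone cannot redeem it.</p>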
    <div>
      <h3>Running marimo on Cloudflare: Workers and Containers</h3>
      <a href="#running-marimo-on-cloudflare-workers-and-containers">
        
      </a>
    </div>
    <p>In addition to running marimo notebooks locally, you can use Cloudflare to share and run them via <a href="https://developers.cloudflare.com/workers/static-assets/"><u>Workers Static Assets</u></a> or <a href="https://developers.cloudflare.com/containers/"><u>Cloudflare Containers</u></a>.</p><p>If you have a local notebook you want to share, you can publish it to Workers. This works because marimo can export notebooks to WebAssembly, allowing them to run entirely in the browser. You can get started with just two commands:</p>
            <pre><code>marimo export html-wasm notebook.py -o output_dir --mode edit --include-cloudflare
npx wrangler deploy
</code></pre>
            <p>If your notebook needs authentication, you can layer in <a href="https://developers.cloudflare.com/cloudflare-one/policies/access/"><u>Cloudflare Access</u></a> for secure, authenticated access.</p><p>For notebooks that require more compute, persistent sessions, or long-running tasks, you can deploy marimo on our <a href="https://blog.cloudflare.com/containers-are-available-in-public-beta-for-simple-global-and-programmable/"><u>new container platform</u></a>. To get started, check out our <a href="https://github.com/cloudflare/containers-demos/tree/main/marimo"><u>marimo container example</u></a> on GitHub.</p>
    <div>
      <h3>What’s next for Cloudflare + marimo</h3>
      <a href="#whats-next-for-cloudflare-marimo">
        
      </a>
    </div>
    <p>This blog post marks just the beginning of Cloudflare's partnership with marimo. While we're excited to see how you use our joint WebAssembly-based notebook platform to explore your Cloudflare data, we also want to help you bring serious compute to bear on your data — to empower you to run large scale analyses and batch jobs straight from marimo notebooks. Stay tuned!</p> ]]></content:encoded>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[API]]></category>
            <category><![CDATA[R2]]></category>
            <category><![CDATA[Data Catalog]]></category>
            <category><![CDATA[Notebooks]]></category>
            <guid isPermaLink="false">1oYZ3vFOAUy5PhZyKNm286</guid>
            <dc:creator>Carlos Rodrigues</dc:creator>
            <dc:creator>Jorge Pacheco</dc:creator>
            <dc:creator>Keith Adler</dc:creator>
            <dc:creator>Akshay Agrawal (Guest Author)</dc:creator>
            <dc:creator>Myles Scolnick (Guest Author)</dc:creator>
        </item>
        <item>
            <title><![CDATA[A look inside the Cloudflare ML Ops platform]]></title>
            <link>https://blog.cloudflare.com/mlops/</link>
            <pubDate>Thu, 07 Dec 2023 14:00:42 GMT</pubDate>
            <description><![CDATA[ To help our team continue to innovate efficiently, our MLOps effort has collaborated with Cloudflare’s data scientists to implement the following best practices ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4GvjtY1dTv1z9QsDBcP5q9/9430817e4123b426b6e26b1ef02d361a/image1.png" />
            
            </figure><p>We've been relying on ML and AI for our core services like Web Application Firewall (WAF) since the early days of Cloudflare. Through this journey, we've learned many lessons about running AI deployments at scale, and all the tooling and processes necessary. We recently launched <a href="/workers-ai/">Workers AI</a> to help abstract a lot of that away for inference, giving developers an easy way to leverage powerful models with just a few lines of code. In this post, we’re going to explore some of the lessons we’ve learned on the other side of the ML equation: <i>training</i>.</p><p>Cloudflare has extensive experience training models and using them to improve our products. A constantly-evolving ML model drives the <a href="/data-generation-and-sampling-strategies/">WAF attack score</a> that helps protect our customers from malicious payloads. Another evolving model powers our <a href="/stop-the-bots-practical-lessons-in-machine-learning/">bot management</a> product to catch and <a href="/cloudflare-bot-management-machine-learning-and-more/">prevent bot attacks</a> on our <a href="/machine-learning-mobile-traffic-bots/">customers</a>. Our customer support is <a href="/using-data-science-and-machine-learning-for-improved-customer-support/">augmented by data science</a>. We build machine learning to <a href="/threat-detection-machine-learning-models/">identify threats</a> with our global network. To top it all off, Cloudflare is delivering <a href="/scalable-machine-learning-at-cloudflare/">machine learning at unprecedented scale</a> across our network.</p><p>Each of these products, along with many others, has elevated ML models — including experimentation, training, and deployment — to a crucial position within Cloudflare. To help our team continue to innovate efficiently, our MLOps effort has collaborated with Cloudflare’s data scientists to implement the following best practices.</p>
    <div>
      <h3>Notebooks</h3>
      <a href="#notebooks">
        
      </a>
    </div>
    <p>Given a use case and data, the first step for many Data Scientists and AI Scientists is to set up an environment for exploring data, feature engineering, and model experiments. <a href="https://docs.jupyter.org/en/latest/start/index.html">Jupyter Notebooks</a> are a common tool to satisfy these requirements: they provide an easy remote Python environment that can be run in the browser or connected to a local code editor. To make notebooks scalable and open to collaboration, we deploy <a href="https://jupyter.org/hub">JupyterHub</a> on Kubernetes. With JupyterHub, we can manage resources for teams of Data Scientists and ensure they get a suitable development environment. Each team can tailor its environment by pre-installing libraries and configuring user environments to meet the specific needs of the team, or even of individual projects.</p><p>This notebook space is always evolving as well. Open-source projects add further features, such as:</p><ul><li><p><a href="https://nbdev.fast.ai/">nbdev</a> - a Python package to improve the notebook experience</p></li><li><p><a href="https://www.kubeflow.org/docs/components/notebooks/overview/">Kubeflow</a> - the Kubernetes-native CNCF project for machine learning</p></li><li><p><a href="https://www.deploykf.org/">deployKF</a> - ML Platforms on any Kubernetes cluster, with centralized configs, in-place upgrades, and support for leading ML &amp; Data tools like Kubeflow, Airflow, and MLflow</p></li></ul>
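<p>As a rough illustration of that per-team tailoring, a JupyterHub deployment with KubeSpawner can expose profiles like the following (a sketch with hypothetical images and limits, not our production configuration):</p>

```python
# jupyterhub_config.py (sketch): `c` is the config object JupyterHub injects
c.JupyterHub.spawner_class = "kubespawner.KubeSpawner"

# One profile per team: each picks a pre-built image with its own libraries
c.KubeSpawner.profile_list = [
    {
        "display_name": "Data Science (CPU)",
        "kubespawner_override": {
            "image": "registry.example.com/ds-notebook:latest",  # hypothetical
            "cpu_limit": 4,
            "mem_limit": "16G",
        },
    },
    {
        "display_name": "Model Training (GPU)",
        "kubespawner_override": {
            "image": "registry.example.com/gpu-notebook:latest",  # hypothetical
            "extra_resource_limits": {"nvidia.com/gpu": "1"},
        },
    },
]
```

<p>Users then pick a profile at spawn time, and resource limits keep any single notebook from starving the cluster.</p>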
    <div>
      <h3>GitOps</h3>
      <a href="#gitops">
        
      </a>
    </div>
    <p>Our goal is to provide an easy-to-use platform for Data Scientists and AI Scientists to develop and test machine learning models quickly. Hence, we have adopted GitOps as our continuous delivery and infrastructure management strategy for the MLOps platform at Cloudflare. GitOps is a software development methodology that leverages Git, a distributed version control system, as the single source of truth for defining and managing infrastructure, application configurations, and deployment processes. It aims to automate and streamline the deployment and management of applications and infrastructure in a declarative and auditable manner, and it aligns well with the principles of automation and collaboration that are crucial for <a href="https://www.cloudflare.com/learning/ai/what-is-machine-learning/">machine learning (ML)</a> workflows.</p><p>A data scientist needs to define the desired state of infrastructure and applications. This usually takes a lot of custom work, but with <a href="https://argo-cd.readthedocs.io/en/stable/">ArgoCD</a> and model templates, all it takes is a simple pull request to add new applications. Helm charts and Kustomize are both supported to allow for configuration across different environments and jobs. With ArgoCD, declarative GitOps then drives the Continuous Delivery process: ArgoCD continuously checks the state of the infrastructure and applications to ensure that they stay synced with the Git repository.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/46I2rJ6bnhsXmVC8f7sK3i/7b47bc18341611aad94ebbb81cb30be3/image2.png" />
            
            </figure><p>In the future, we plan to migrate our platform (including Jupyterhub) to <a href="https://www.kubeflow.org/">Kubeflow</a>, a machine learning workflow platform on Kubernetes that simplifies the development, deployment, and management of notebooks and end-to-end pipelines. This is best deployed using a new project, <a href="https://www.deploykf.org/">deployKF</a>, which allows for distributed configuration management across multiple components available with Kubeflow, and others that extend beyond what is offered within Kubeflow.</p>
    <div>
      <h3>Templates</h3>
      <a href="#templates">
        
      </a>
    </div>
    <p>Starting a project with the right tools and structure can be the difference between success and stagnation. Within Cloudflare, we've curated an array of model templates: production-ready data science repositories, each with an example model. These model templates are deployed through production to continually ensure they are a stable foundation for future projects. To start a new project, all it takes is one Makefile command to build a new CI/CD project in the Git project of the user’s choosing. The templates ship the same utility packages used in our Jupyter Notebooks, including connections to R2 / S3 / GCS buckets and D1 / Postgres / BigQuery / ClickHouse databases. Data scientists can use these templates to instantly kickstart new projects with confidence. These templates are not yet publicly available, but our team plans to open source them in the future.</p><p><b>1. Training Template</b></p><p>Our model training template provides a solid foundation for building any model. It is configured to help extract, transform, and load (ETL) data from any data source. The template includes helper functions for feature engineering, for tracking experiments with model metadata, and for choosing orchestration through a Directed Acyclic Graph (DAG) to productionize the model pipeline. Each orchestration can be configured to use <a href="https://github.com/airflow-helm/charts">Airflow</a> or <a href="https://argoproj.github.io/argo-workflows/">Argo Workflows</a>.</p><p><b>2. Batch Inference Template</b></p><p>Batch and micro-batch inference can make a significant impact on processing efficiency. Our batch inference template schedules models for consistent results, and can be configured to use <a href="https://github.com/airflow-helm/charts">Airflow</a> or <a href="https://argoproj.github.io/argo-workflows/">Argo Workflows</a>.</p><p><b>3. Stream Inference Template</b></p><p>This template makes it easy for our team to deploy real-time inference. Tailored for Kubernetes as a microservice using <a href="https://fastapi.tiangolo.com/">FastAPI</a>, it allows our team to run inference using familiar Python in a container. The microservice has built-in interactive REST documentation with <a href="https://swagger.io/">Swagger</a> and integration with Cloudflare Access authentication tokens in <a href="https://developers.cloudflare.com/cloudflare-one/api-terraform/access-with-terraform/">Terraform</a>.</p><p><b>4. Explainability Template</b></p><p>Our model template for explainability spins up dashboards to illuminate the model type and experiments. It is important to understand key values such as a time-windowed F1 score and the drift of features and data over time. Tools like <a href="https://streamlit.io/">Streamlit</a> and <a href="https://bokeh.org/">Bokeh</a> help make this possible.</p>
    <div>
      <h3>Orchestration</h3>
      <a href="#orchestration">
        
      </a>
    </div>
    <p>Organizing data science into a consistent pipeline involves a lot of data and several model versions. Enter Directed Acyclic Graphs (DAGs), a robust flow-chart orchestration paradigm that weaves together the steps from data to model, and model to inference. There are many approaches to running DAG pipelines, but we have found that the data science team's preference comes first. Each team has different approaches based on their use cases and experience.</p><p><a href="https://github.com/airflow-helm/charts"><b>Apache Airflow</b></a> <b>- The Standard DAG Composer</b></p><p>Apache Airflow is the standard DAG-based orchestration approach. With a vast community and extensive plugin support, <a href="/automating-data-center-expansions-with-airflow/">Airflow excels in handling diverse workflows</a>. The flexibility to integrate with a multitude of systems and a web-based UI for task monitoring make it a popular choice for orchestrating complex sequences of tasks. Airflow can be used to run any data or machine learning workflow.</p><p><a href="https://argoproj.github.io/argo-workflows/"><b>Argo Workflows</b></a> <b>- Kubernetes-native Brilliance</b></p><p>Built for Kubernetes, Argo Workflows embraces the container ecosystem for orchestrating workflows. It boasts an intuitive YAML-based workflow definition and excels in running microservices-based workflows. The integration with Kubernetes enables scalability, reliability, and native container support, making it an excellent fit for organizations deeply rooted in the Kubernetes ecosystem. Argo Workflows can also be used to run any data or machine learning workflow.</p><p><a href="https://www.kubeflow.org/docs/components/pipelines/v2/introduction/"><b>Kubeflow Pipelines</b></a> <b>- A Platform for Workflows</b></p><p>Kubeflow Pipelines is a more specific approach, tailored for orchestrating machine learning workflows. “KFP” aims to address the unique demands of data preparation, model training, and deployment in the ML landscape. As an integrated component of the Kubeflow ecosystem, it streamlines ML workflows with a focus on collaboration, reusability, and versioning. Its compatibility with Kubernetes ensures seamless integration and efficient orchestration.</p><p><a href="https://temporal.io/"><b>Temporal</b></a> <b>- The Stateful Workflow Enabler</b></p><p>Temporal emphasizes the orchestration of long-running, stateful workflows. This relative newcomer shines in managing resilient, event-driven applications, preserving workflow state and enabling efficient recovery from failures. Its unique selling point is a durable, fault-tolerant solution for complex, stateful workflows.</p><p>In the orchestration landscape, the choice ultimately boils down to the team and use case. These are all open-source projects, so the only limitation is supporting different styles of work, which we find is worth the investment.</p>
    <div>
      <h3>Hardware</h3>
      <a href="#hardware">
        
      </a>
    </div>
    <p>Achieving optimal performance necessitates an understanding of workloads and the underlying use cases in order to provide teams with <a href="/cloudflares-gen-x-servers-for-an-accelerated-future/">effective hardware</a>. The goal is to enable data scientists while striking a balance between enablement and utilization. Each workload is different, and it is important to fine-tune each use case for the capabilities of <a href="/bringing-ai-to-the-edge/">GPUs</a> and <a href="/debugging-hardware-performance-on-gen-x-servers/">CPUs</a> to find the perfect tool for the job. For core datacenter workloads and edge inference, GPUs have leveled up the speed and efficiency that is core to our business. With <a href="https://www.cloudflare.com/learning/performance/what-is-observability/">observability</a> metrics consumed by <a href="https://prometheus.io/">Prometheus</a>, we can optimize orchestration for <a href="/getting-to-the-core/">performance</a>, maximize hardware utilization, and operate within a Kubernetes-native experience.</p>
    <div>
      <h3>Adoption</h3>
      <a href="#adoption">
        
      </a>
    </div>
    <p>Adoption is often one of the most challenging steps in the MLOps journey. Before jumping into building, it is important to understand the different teams and their approaches to data science. At Cloudflare, this process began years ago, when each of the teams started their own machine learning solutions separately. As these solutions evolved, we ran into the common challenge of working across the company to prevent work from becoming isolated from other teams. In addition, there were teams that had potential for machine learning but no data science expertise of their own. This presented an opportunity for MLOps to step in — both to help streamline and standardize the ML processes being employed by each team, and to introduce potential new projects to data science teams to start the ideation and discovery process.</p><p>We have found the most success when we can help get projects started and shape their pipelines for success. Providing components for shared use, such as notebooks, orchestration, data versioning (DVC), feature engineering (Feast), and model versioning (MLflow), allows teams to collaborate directly.</p>
    <div>
      <h3>Looking forward</h3>
      <a href="#looking-forward">
        
      </a>
    </div>
    <p>There is no doubt that data science is <a href="/best-place-region-earth-inference/">evolving our business</a> and the <a href="/ai-companies-building-cloudflare/">businesses of our customers</a>. We improve our own products with models, and have built <a href="/announcing-ai-gateway/">AI infrastructure</a> that can help us <a href="https://www.cloudflare.com/application-services/solutions/">secure applications</a> and <a href="/secure-generative-ai-applications/">applications built with AI</a>. We can leverage the <a href="/workers-ai/">power of our network to deliver AI</a> for us and our customers. We have partnered with <a href="/partnering-with-hugging-face-deploying-ai-easier-affordable/">machine</a> <a href="https://www.cloudflare.com/press-releases/2023/cloudflare-partners-with-databricks">learning</a> <a href="https://www.cloudflare.com/press-releases/2023/cloudflare-and-meta-collaborate-to-make-llama-2-available-globally">giants</a> to make it easier for the data science community to deliver real value from data.</p><p>The call to action is this: join the <a href="https://discord.com/invite/cloudflaredev">Cloudflare community</a> in bringing modern software practices and tools to data science. Be on the lookout for more data science from Cloudflare. Help us securely leverage data to help build a better Internet.</p> ]]></content:encoded>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Data]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Machine Learning]]></category>
            <category><![CDATA[MLops]]></category>
            <category><![CDATA[Hardware]]></category>
            <guid isPermaLink="false">1NOp4Ep4CYMxL6OYRa2FaU</guid>
            <dc:creator>Keith Adler</dc:creator>
            <dc:creator>Rio Harapan Pangihutan</dc:creator>
        </item>
    </channel>
</rss>