Perform machine learning on MongoDB with TensorFlow

The TensorFlow logo on a plain grey background with undulating lines embellishing the top and bottom

With REFORM and TensorFlow you can start training machine learning models on MongoDB in minutes.

MongoDB stores data in flexible, JSON-like documents, meaning fields can vary from document to document and data structure can be changed over time. The document model maps to the objects in your application code, making data easy to work with. MongoDB is a distributed database at its core, so high availability, horizontal scaling, and geographic distribution are built in and easy to use.

JSON is not a normal tabular data format. Unlike tabular data, the structure of each piece of JSON is tailored to a specific purpose. For example, a piece of JSON for a form about you and your pets has a very different structure from a piece of JSON for a manufacturing dashboard. Machine learning models expect standard tabular data, just as spreadsheets and charts do. To use JSON to train our models and make predictions, we first need to transform it into meaningful tables.
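To make the transformation concrete, here is a minimal sketch of flattening a nested document into rows using pandas. The document and its fields are made up for illustration; they are not part of REFORM's API.

```python
import pandas as pd

# A hypothetical nested document, e.g. a form about a person and their pets.
doc = {
    "name": "Alice",
    "address": {"city": "London", "postcode": "N1"},
    "pets": [
        {"species": "cat", "age": 3},
        {"species": "dog", "age": 5},
    ],
}

# Flatten the nesting: one tabular row per pet, with the person-level
# fields repeated on each row.
table = pd.json_normalize(doc, record_path="pets", meta=["name", ["address", "city"]])
print(table)
```

Note that a single document can produce several rows: the one-to-many `pets` array becomes two rows, which is exactly the kind of structural decision a model-ready table has to encode.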

A screenshot of REFORM. The structure of a JSON dataset is presented as a filesystem which is being browsed and from which relational columns are being picked

REFORM lets you access MongoDB data as tables in TensorFlow for training and prediction. Simply provide the details of your MongoDB cluster, browse even the most complex data as if it were a filesystem, and pick the fields you're interested in. Then simply load the result into a dataframe with Pandas.
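As a sketch of that last step: a REFORM table can be read straight into a pandas DataFrame. The endpoint URL below is hypothetical; to keep the example self-contained it is stood in by an inline CSV.

```python
import io
import pandas as pd

# In practice you would point read_csv at the table REFORM exposes, e.g.:
#   df = pd.read_csv("https://reform.example.com/tables/pets.csv")  # hypothetical URL
# Here an inline CSV stands in for that download.
csv_data = io.StringIO(
    "name,species,age\n"
    "Alice,cat,3\n"
    "Alice,dog,5\n"
)
df = pd.read_csv(csv_data)
print(df.head())
```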


You can also optionally download the data as a file:

file_name = tf.keras.utils.get_file("example", "")

REFORM magically transforms the latest data into a mathematically correct, analysis-ready table and feeds it into your models for training and prediction.

The TensorFlow logo with shadows to the left and right highlighting the T and F which make up this logo

REFORM also supports use cases where data needs additional JOINs or GROUP BYs before being used for machine learning. REFORM will transform your data into tables in AWS Redshift, Google BigQuery, Snowflake, AWS Athena, and MS SQL Server, all of which support JOINs and GROUP BYs and can be used as data sources by TensorFlow.
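To illustrate the kind of JOIN and GROUP BY involved, here is the pandas equivalent on two made-up tables (the warehouse itself would run this as SQL; the table and column names are hypothetical):

```python
import pandas as pd

# Two hypothetical tables of the sort REFORM might land in a warehouse.
owners = pd.DataFrame({"owner_id": [1, 2], "city": ["London", "Leeds"]})
pets = pd.DataFrame({"owner_id": [1, 1, 2], "species": ["cat", "dog", "cat"]})

# JOIN the tables on owner_id, then GROUP BY city to get a per-city
# pet count -- the kind of aggregate feature a model might consume.
joined = pets.merge(owners, on="owner_id", how="inner")
per_city = (
    joined.groupby("city", as_index=False)
    .size()
    .rename(columns={"size": "pet_count"})
)
print(per_city)
```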