Pipelines - OpenShift Data Science

Author: Rachel Lombard | Last edit: September 26, 2023 | Design type: Topology | Product area: OpenShift AI
Overview
Pipelines allow users to test and experiment with models, as well as to change different variables and measure performance. Through this experimentation and testing, users train their models and develop a better understanding of each experiment.
Configuring a pipeline
Empty state
Users can create and import a pipeline from the project details page in Data Science Projects, reached by navigating through the Jump links section of the page to Pipelines. A wrench icon indicates that a pipeline server needs to be configured, and that the button must be clicked to begin the process. Configuring a pipeline server and importing a pipeline can also be done from the Data Science Pipelines section of the left nav, but surfacing pipelines within the project details page of Data Science Projects is more efficient and reduces cognitive fatigue.
Example of the pipelines section of the project details page when a pipeline server is not configured.
Example of pipelines in the same state, but in the Pipelines page of the Data Science Pipelines section in the left nav.
Modal
A modal is presented to the user to configure the pipeline server.
Example of the modal when configuring a pipeline server.
Importing a pipeline
Once the pipeline server is configured, users can import a pipeline into their project. When the user clicks Import pipeline, they are presented with a modal where they can name the pipeline and upload the pipeline file.
Example of the modal with a form when importing a pipeline.
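The file uploaded in this modal is a compiled pipeline definition. As a point of reference, the sketch below shows one way such a file might be produced with the Kubeflow Pipelines (kfp) SDK, which Data Science Pipelines builds on; the component and pipeline names are illustrative assumptions, and depending on the product version a different compiler (for example, kfp-tekton) may be required.

    # Minimal sketch: producing a pipeline file to upload in the import modal.
    # Assumes the kfp v2 SDK; names and values are illustrative.
    from kfp import dsl, compiler

    @dsl.component
    def train_model(learning_rate: float) -> str:
        # Placeholder step; a real component would train and persist a model.
        return f"trained with learning rate {learning_rate}"

    @dsl.pipeline(name="example-training-pipeline")
    def training_pipeline(learning_rate: float = 0.01):
        train_model(learning_rate=learning_rate)

    if __name__ == "__main__":
        # Compile to a YAML file that can be uploaded through the import modal.
        compiler.Compiler().compile(
            pipeline_func=training_pipeline,
            package_path="training_pipeline.yaml",
        )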
Creating a run
Empty state
Once pipeline configuration is complete, the user needs to create a run to begin testing a pipeline or experiment. During this process, the user can see details such as start and finish times or trigger information.
An empty state to create a run can be found in the Runs section of the left nav.
Example of the empty state to create a run.
Form
Once the user clicks ‘Create run’, a form is presented that allows the user to choose options for the run.
Example of create run form.
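For comparison with the UI form, the sketch below shows how an equivalent run might be created with the kfp SDK client; the pipeline server route, token, run name, and parameter values are illustrative assumptions.

    # Minimal sketch: creating a run with the kfp client instead of the create run form.
    # The host, token, names, and parameter values are illustrative assumptions.
    from kfp import Client

    client = Client(
        host="https://ds-pipeline-example.apps.example.com",  # assumed pipeline server route
        existing_token="sha256~example-token",                # assumed bearer token
    )

    run = client.create_run_from_pipeline_package(
        pipeline_file="training_pipeline.yaml",    # the file imported earlier
        arguments={"learning_rate": 0.01},         # options the create run form would collect
        run_name="example-run",
        experiment_name="example-experiment",
    )
    print(run.run_id)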
Viewing and managing pipelines
Data list
Once a pipeline is configured and imported, the user can view the pipelines created in the Data Science Pipelines > Pipelines section of the left nav.
Example of pipeline view including run status information.
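The same information shown in this data list can also be retrieved programmatically; a minimal sketch using the kfp client object from the earlier example is shown below, with the page size as an illustrative assumption.

    # Minimal sketch: listing pipelines with the kfp client, mirroring the data list view.
    # Reuses the `client` object from the previous example; page size is an assumption.
    response = client.list_pipelines(page_size=10)
    for pipeline in response.pipelines or []:
        # Each entry carries the pipeline's name, id, and creation time.
        print(pipeline)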
Topology
Details of the run are shown, along with a dropdown that gives the user control over the run. A zoom and minimize action bar appears at the bottom left of the run view, since pipelines can grow quite large as variables change. The page also has a collapsible bottom drawer that shows the run details and output. If the user clicks on a step in the pipeline, a right-side drawer shows the details for that step.

Example of the pipelines run page.
Runs can be triggered immediately or scheduled for a future time, either as a one-off or on a recurring basis.
Example of Scheduled and Triggered runs page.
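As a counterpart to the scheduling options in the UI, the sketch below shows how a recurring run might be scheduled with the kfp client; the experiment name, cron expression, and parameter values are illustrative assumptions, and the attribute that exposes the experiment id varies between kfp SDK versions.

    # Minimal sketch: scheduling a recurring run with the kfp client.
    # Experiment name, cron expression, and params are illustrative assumptions.
    experiment = client.create_experiment("scheduled-experiments")
    experiment_id = getattr(experiment, "experiment_id", None) or getattr(experiment, "id", None)

    job = client.create_recurring_run(
        experiment_id=experiment_id,
        job_name="nightly-training",
        cron_expression="0 0 2 * * *",           # every day at 02:00 (six-field cron assumed)
        pipeline_package_path="training_pipeline.yaml",
        params={"learning_rate": 0.01},
    )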