
SageMaker Studio

SageMaker Studio is an open source library for interacting with Amazon SageMaker Unified Studio resources. With the library, you can access resources such as domains, projects, connections, and databases, all in one place with minimal code.

Table of Contents

  1. Installation
  2. Usage
    1. Setting Up Credentials and ClientConfig
      1. Using ClientConfig
    2. Domain
      1. Domain Properties
    3. Project
      1. Properties
        1. IAM Role ARN
        2. KMS Key ARN
        3. MLflow Tracking Server ARN
        4. S3 Path
      2. Connections
        1. Connection Data
        2. Secrets
      3. Catalogs
      4. Databases and Tables
        1. Databases
        2. Tables
    4. Utils Methods
      1. SQL Utilities
      2. DataFrame Utils
      3. Spark Utilities
    5. Execution APIs
      1. Local Execution APIs
        1. StartExecution API
        2. GetExecution API
        3. ListExecutions API
        4. StopExecution API
      2. Remote Execution APIs
        1. StartExecution API
        2. GetExecution API
        3. ListExecutions API
        4. StopExecution API

1) Installation

SageMaker Studio is published to PyPI, and the latest version of the library can be installed using the following command:

pip install sagemaker-studio
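
To upgrade an existing installation to the latest version:

pip install --upgrade sagemaker-studio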

Supported Python Versions

SageMaker Studio supports Python versions 3.10 and newer.
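
You can check that your interpreter meets this requirement:

python -c "import sys; print(sys.version_info >= (3, 10))"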

Licensing

SageMaker Studio is licensed under the Apache 2.0 License. It is copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. The license is available at: http://aws.amazon.com/apache2.0/

2) Usage

Setting Up Credentials and ClientConfig

If SageMaker Studio is being used within Amazon SageMaker Unified Studio JupyterLab, the library will automatically pull your latest credentials from the environment.

If you are using the library elsewhere, or if you want to use different credentials within SageMaker Unified Studio JupyterLab, you will first need to retrieve your SageMaker Unified Studio credentials and make them available in the environment through either:

  1. Storing them within an AWS named profile. If using a profile name other than default, you will need to supply the profile name by:
    1. Supplying it during initialization of the SageMaker Studio ClientConfig object
    2. Setting the AWS profile name as an environment variable (e.g. export AWS_PROFILE="my_profile_name")
  2. Initializing a boto3 Session object and supplying it when initializing a SageMaker Studio ClientConfig object

AWS Named Profile

To use the AWS named profile, you can update your AWS config file with your profile name and any other settings you would like to use:

[my_profile_name]
region = us-east-1

Your credentials file should have the credentials stored for your profile:

[my_profile_name]
aws_access_key_id=AKIAIOSFODNN7EXAMPLE
aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
aws_session_token=IQoJb3JpZ2luX2IQoJb3JpZ2luX2IQoJb3JpZ2luX2IQoJb3JpZ2luX2IQoJb3JpZVERYLONGSTRINGEXAMPLE

Finally, you can pass in the profile when initializing the ClientConfig object.

from sagemaker_studio import ClientConfig

conf = ClientConfig(profile_name="my_profile_name")

You can also set the profile name as an environment variable:

export AWS_PROFILE="my_profile_name"

Boto3 Session

To use a boto3 Session object for credentials, you will need to initialize the Session and supply it to ClientConfig.

from boto3 import Session
from sagemaker_studio import ClientConfig

my_session = Session(...)
conf = ClientConfig(session=my_session)
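
For example, a Session can be constructed from a named profile and an explicit region using standard boto3 arguments:

from boto3 import Session
from sagemaker_studio import ClientConfig

# Build a boto3 Session from an AWS named profile and region
my_session = Session(profile_name="my_profile_name", region_name="us-east-1")
conf = ClientConfig(session=my_session)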

Using ClientConfig

If using ClientConfig for supplying credentials or changing the AWS region name, the ClientConfig object will need to be supplied when initializing any further SageMaker Studio objects, such as Domain or Project. If you are using a non-production endpoint for an AWS service, it can also be supplied in the ClientConfig. Note: in a SageMaker space, the DataZone endpoint is fetched by default from the metadata JSON file.

from sagemaker_studio import ClientConfig, Project

conf = ClientConfig(region="eu-west-1")
proj = Project(config=conf)
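
The same ClientConfig can be reused across objects. A minimal sketch, assuming profile_name and region can be combined in one ClientConfig (each is shown individually above) and that Domain accepts the same config keyword as Project:

from sagemaker_studio import ClientConfig, Domain, Project

# Combining profile_name and region here is an assumption; each is documented separately
conf = ClientConfig(profile_name="my_profile_name", region="eu-west-1")
dom = Domain(id="123456", config=conf)
proj = Project(name="my_proj_name", domain_id="123456", config=conf)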

Domain

Domain can be initialized as follows.

from sagemaker_studio import Domain

dom = Domain()

If you are not using the SageMaker Studio library within SageMaker Unified Studio JupyterLab, you will need to provide the ID of the domain you want to use.

dom = Domain(id="123456")

Domain Properties

A Domain object has several string properties that can provide information about the domain that you are using.

dom.id
dom.root_domain_unit_id
dom.name
dom.domain_execution_role
dom.status
dom.portal_url

Project

Project can be initialized as follows.

from sagemaker_studio import Project

proj = Project()

If you are not using the SageMaker Studio library within SageMaker Unified Studio JupyterLab or a Data Notebook, you will need to provide either the ID or the name of the project you would like to use, along with the project's domain ID.

proj = Project(name="my_proj_name", domain_id="123456")

Project Properties

A Project object has several string properties that can provide information about the project that you are using.

proj.id
proj.name
proj.domain_id
proj.project_status
proj.domain_unit_id
proj.project_profile_id
proj.user_id
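
For example, these properties can be combined to print a quick summary of the current project:

# Summarize the project using the string properties above
print(f"Project {proj.name} ({proj.id}) in domain {proj.domain_id}: {proj.project_status}")
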
IAM Role ARN

To retrieve the project IAM role ARN, you can access the iam_role field. This returns the IAM role ARN of the default IAM connection within your project.

proj.iam_role

KMS Key ARN

If you are using a KMS key within your project, you can retrieve the kms_key_arn field.

proj.kms_key_arn

MLflow Tracking Server ARN

If you are using an MLflow tracking server within your project, you can retrieve the mlflow_tracking_server_arn field.

proj.mlflow_tracking_server_arn

S3 Path

One of the properties of a Project is s3. You can access various S3 paths that exist within your project.

# S3 path of project root directory
proj.s3.root

# S3 path of datalake consumer Glue DB directory (requires DataLake environment)
proj.s3.datalake_consumer_glue_db

# S3 path of Athena workgroup directory (requires DataLake environment)
proj.s3.datalake_athena_workgroup

# S3 path of workflows output directory (requires Workflows environment)
proj.s3.workflow_output_directory

# S3 path of workflows temp storage directory (requires Workflows environment)
proj.s3.workflow_temp_storage

# S3 path of EMR EC2 log destination directory (requires EMR EC2 environment)
proj.s3.emr_ec2_log_destination

# S3 path of EMR EC2 certificates directory (requires EMR EC2 environment)
proj.s3.emr_ec2_certificates

# S3 path of EMR EC2 log bootstrap directory (requires EMR EC2 environment)
proj.s3.emr_ec2_log_bootstrap

Other Environment S3 Paths

You can also access the S3 path of a different environment by providing an environment ID.

proj.s3.environment_path(environment_id="env_1234")

Connections

You can retrieve a list of connections for a project, or you can retrieve a single connection by providing its name. If no name is passed, the project's default IAM connection is returned.

proj_connections: List[Connection] = proj.connections
proj_iam_conn = proj.connection()
proj_redshift_conn = proj.connection("<my_redshift_connection_name>")

Each Connection object has several properties that can provide information about the connection.

proj_redshift_conn.name
proj_redshift_conn.id
proj_redshift_conn.physical_endpoints[0].host
proj_redshift_conn.iam_role
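
For example, you can iterate over all project connections and print these properties:

# List every connection in the project by name and ID
for conn in proj.connections:
    print(conn.name, conn.id)
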
Connection Data

To retrieve all properties of a Connection, you can access the data field to get a ConnectionData object. ConnectionData fields can be accessed using dot notation (e.g. conn_data.top_level_field). To retrieve further nested data within ConnectionData, you can access it as a dictionary (e.g. conn_data.top_level_field["nested_field"]).

conn_data: ConnectionData = proj_redshift_conn.data
red_temp_dir = conn_data.redshiftTempDir
lineage_sync = conn_data.lineageSync
lineage_job_id = lineage_sync["lineageJobId"]

# Fetching Glue details from a Spark Glue connection
spark_conn = proj.connection("<my_spark_glue_connection_name>")
spark_conn_id = spark_conn.id  # renamed to avoid shadowing the id() builtin
env_id = spark_conn.environment_id
glue_conn = spark_conn.data.glue_connection_name
workers = spark_conn.data.number_of_workers
glue_version = spark_conn.data.glue_version

# Fetching the tracking server ARN and name from an MLflow connection
ml_flow_conn = proj.connection("<my_ml_flow_connection_name>")
tracking_server_arn = ml_flow_conn.data.tracking_server_arn
tracking_server_name = ml_flow_conn.data.tracking_server_name

Catalogs

If your Connection is of the LAKEHOUSE or IAM type, you can retrieve a list of catalogs, or a single catalog by providing its ID.
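
A minimal sketch, assuming catalog accessors analogous to Project.connections and Project.connection(...) shown above; the names catalogs and catalog here are hypothetical:

# Hypothetical accessors mirroring the connection API above
conn = proj.connection()                      # default IAM connection
conn_catalogs = conn.catalogs                 # list of catalogs (assumed property)
my_catalog = conn.catalog("<my_catalog_id>")  # single catalog by ID (assumed method)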