# Training Job Sample

This sample demonstrates how to start training jobs using your own training script, packaged in a SageMaker-compatible container, using the AWS Controllers for Kubernetes (ACK) service controller for Amazon SageMaker.

## Prerequisites
This sample assumes that you have already configured a Kubernetes cluster with the ACK operator. It also assumes that you have installed `kubectl` - you can find a link on our [installation page](To do).

You will also need an IAM role with permissions to access your S3 resources and SageMaker. If you have not yet created a role with these permissions, you can find an example policy at [Amazon SageMaker Roles](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html#sagemaker-roles-createtrainingjob-perms).

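If you prefer the CLI, the following is a minimal sketch of creating such a role with the AWS CLI. The role name is a hypothetical placeholder, and `AmazonSageMakerFullAccess` is a broad managed policy that you may want to replace with a narrower custom policy:

```
# Create an execution role that SageMaker is allowed to assume
$ aws iam create-role --role-name ack-sagemaker-execution-role \
    --assume-role-policy-document '{
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "sagemaker.amazonaws.com"},
        "Action": "sts:AssumeRole"
      }]
    }'

# Attach permissions for SageMaker (including the S3 access it needs)
$ aws iam attach-role-policy --role-name ack-sagemaker-execution-role \
    --policy-arn arn:aws:iam::aws:policy/AmazonSageMakerFullAccess
```
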
### Creating your first Job
The easiest way to get started is to take a look at the sample training job and its corresponding [README](/samples/training/README.md). A minimal sketch of a training job manifest is shown below.
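The following manifest is illustrative only; the kind, apiVersion, and field names are assumptions based on the ACK SageMaker controller's CRDs and should be checked against the sample and your installed CRDs, and all placeholder values (image, bucket, role ARN) are hypothetical:

```
apiVersion: sagemaker.services.k8s.aws/v1alpha1
kind: TrainingJob
metadata:
  name: my-training-job
spec:
  trainingJobName: my-training-job
  roleARN: <your SageMaker execution role ARN>
  algorithmSpecification:
    trainingImage: <your SageMaker-compatible image>
    trainingInputMode: File
  outputDataConfig:
    s3OutputPath: s3://<your bucket>/output
  resourceConfig:
    instanceCount: 1
    instanceType: ml.m5.large
    volumeSizeInGB: 5
  stoppingCondition:
    maxRuntimeInSeconds: 3600
  inputDataConfig:
    - channelName: train
      dataSource:
        s3DataSource:
          s3DataType: S3Prefix
          s3URI: s3://<your bucket>/train
          s3DataDistributionType: FullyReplicated
```
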

# Batch Transform Job Sample

This sample demonstrates how to start batch transform jobs using your own batch transform script, packaged in a SageMaker-compatible container, using the AWS Controllers for Kubernetes (ACK) service controller for Amazon SageMaker.

## Prerequisites
This sample assumes that you have already configured a Kubernetes cluster with the ACK operator. It also assumes that you have installed `kubectl` - you can find a link on our [installation page](To do).

You will also need a model in SageMaker for this sample. If you do not have one, you must first create a [model](/samples/model/README.md).

### Updating the Batch Transform Job Specification
In the `my-batch-transform-job.yaml` file, modify the placeholder values with those associated with your account and batch transform job. A sketch of what this file might contain is shown below.

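The following is an illustrative sketch only; the kind and field names are assumptions modeled on the SageMaker `CreateTransformJob` API and should be checked against the sample file and your installed CRDs:

```
apiVersion: sagemaker.services.k8s.aws/v1alpha1
kind: BatchTransformJob
metadata:
  name: my-batch-transform-job
spec:
  transformJobName: my-batch-transform-job
  modelName: <your SageMaker model name>
  transformInput:
    contentType: text/csv
    dataSource:
      s3DataSource:
        s3DataType: S3Prefix
        s3URI: s3://<your bucket>/input
  transformOutput:
    s3OutputPath: s3://<your bucket>/output
  transformResources:
    instanceCount: 1
    instanceType: ml.m5.large
```
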
## Submitting your Batch Transform Job
### Create a Batch Transform Job
To submit your prepared batch transform job specification, apply the specification to your Kubernetes cluster as follows:
```
$ kubectl apply -f my-batch-transform-job.yaml
batchtransformjob.sagemaker.services.k8s.aws/my-batch-transform-job created
```
### List Batch Transform Jobs
To list all Batch Transform Jobs created using the ACK controller, use the following command:
```
$ kubectl get batchtransformjob
```
### Describe a Batch Transform Job
To get more details about the Batch Transform Job once it has been submitted - for example to check its status, errors, or parameters - use the following command:
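```
$ kubectl describe batchtransformjob my-batch-transform-job
```
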

# Endpoint Sample

This sample demonstrates how to create Endpoints and Endpoint Configs, hosting a model packaged in a SageMaker-compatible container, using the AWS Controllers for Kubernetes (ACK) service controller for Amazon SageMaker.

## Prerequisites
This sample assumes that you have already configured a Kubernetes cluster with the ACK operator. It also assumes that you have installed `kubectl` - you can find a link on our [installation page](To do).

You will also need a model in SageMaker for this sample. If you do not have one, you must first create a [model](/samples/model/README.md).

In order to create an endpoint with [endpoint_base](/samples/endpoint/endpoint_base.yaml), you will need an endpoint config, which can be created with [endpoint_config](/samples/endpoint/endpoint_config.yaml).

### Updating the Endpoint Specification
In the `endpoint_config.yaml` file, modify the placeholder values with those associated with your account. The `spec.productionVariants.modelName` field should reference the SageMaker model from the previous step. A sketch of the two manifests is shown below.

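The following is an illustrative sketch of the two resources; the field names are assumptions based on the ACK SageMaker controller's CRDs and should be verified against the sample files, and all names are placeholders:

```
apiVersion: sagemaker.services.k8s.aws/v1alpha1
kind: EndpointConfig
metadata:
  name: my-endpoint-config
spec:
  endpointConfigName: my-endpoint-config
  productionVariants:
    - variantName: AllTraffic
      modelName: <your SageMaker model name>
      initialInstanceCount: 1
      instanceType: ml.m5.large
      initialVariantWeight: 1
---
apiVersion: sagemaker.services.k8s.aws/v1alpha1
kind: Endpoint
metadata:
  name: my-endpoint
spec:
  endpointName: my-endpoint
  endpointConfigName: my-endpoint-config
```
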
## Submitting your Endpoint Specification
### Create an Endpoint Config and Endpoint
To submit your prepared endpoint specification, apply the specification to your Kubernetes cluster as follows:
```
$ kubectl apply -f my-endpoint.yaml
endpoints.sagemaker.services.k8s.aws/my-endpoint created
```
If it is an endpoint config:
```
$ kubectl apply -f my-endpoint-config.yaml
endpointconfigs.sagemaker.services.k8s.aws/my-endpoint-config created
```
### List Endpoint Configs and Endpoints
To list all Endpoints created using the ACK controller, use the following command:
```
$ kubectl get endpoints.sagemaker.services.k8s.aws
```
If it is an endpoint config, use `endpointconfigs.sagemaker.services.k8s.aws`:
```
$ kubectl get endpointconfigs.sagemaker.services.k8s.aws
```
### Describe an Endpoint Config and Endpoint
To get more details about the Endpoint or Endpoint Config once it has been submitted - for example to check its status, errors, or parameters - use the following commands:
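```
$ kubectl describe endpoints.sagemaker.services.k8s.aws my-endpoint
$ kubectl describe endpointconfigs.sagemaker.services.k8s.aws my-endpoint-config
```
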

# Hyperparameter Tuning Job Sample

This sample demonstrates how to start hyperparameter tuning jobs using your own training script, packaged in a SageMaker-compatible container, using the AWS Controllers for Kubernetes (ACK) service controller for Amazon SageMaker.

## Prerequisites
This sample assumes that you have already configured a Kubernetes cluster with the ACK operator. It also assumes that you have installed `kubectl` - you can find a link on our [installation page](To do).

In order to follow this sample, you must first have a training script packaged in a container image that is [compatible with Amazon SageMaker](https://docs.aws.amazon.com/sagemaker/latest/dg/amazon-sagemaker-containers.html). Here is a list of available [containers](https://github.com/aws/deep-learning-containers/blob/master/available_images.md).

### Get an Image
All SageMaker hyperparameter tuning jobs run from within a container that has all necessary dependencies and modules pre-installed, with the training scripts referencing the expected input and output directories. Sample container images are [available](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-algo-docker-registry-paths.html).

A container image URL and tag has the following structure:
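```
<account-id>.dkr.ecr.<region>.amazonaws.com/<image-name>:<tag>
```
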
### Updating the Hyperparameter Job Specification

In the `my-hyperparameter-job.yaml` file, modify the placeholder values with those associated with your account and hyperparameter tuning job. A sketch of what this file might contain is shown below.

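The following is an illustrative sketch only; the kind and field names are assumptions modeled on the SageMaker `CreateHyperParameterTuningJob` API and the ACK controller's naming conventions, and should be checked against the sample file:

```
apiVersion: sagemaker.services.k8s.aws/v1alpha1
kind: HyperParameterTuningJob
metadata:
  name: my-hyperparameter-job
spec:
  hyperParameterTuningJobName: my-hyperparameter-job
  hyperParameterTuningJobConfig:
    strategy: Bayesian
    hyperParameterTuningJobObjective:
      type: Minimize
      metricName: validation:error
    resourceLimits:
      maxNumberOfTrainingJobs: 10
      maxParallelTrainingJobs: 2
    parameterRanges:
      integerParameterRanges:
        - name: epochs
          minValue: "1"
          maxValue: "10"
          scalingType: Linear
  trainingJobDefinition:
    algorithmSpecification:
      trainingImage: <your SageMaker-compatible image>
      trainingInputMode: File
    roleARN: <your SageMaker execution role ARN>
    inputDataConfig:
      - channelName: train
        dataSource:
          s3DataSource:
            s3DataType: S3Prefix
            s3URI: s3://<your bucket>/train
            s3DataDistributionType: FullyReplicated
    outputDataConfig:
      s3OutputPath: s3://<your bucket>/output
    resourceConfig:
      instanceCount: 1
      instanceType: ml.m5.large
      volumeSizeInGB: 5
    stoppingCondition:
      maxRuntimeInSeconds: 3600
```
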
### Enabling Spot Training
In the `my-hyperparameter-job.yaml` file, under `spec.trainingJobDefinition`, add `enableManagedSpotTraining` and set the value to `true`. You will also need to specify a `spec.trainingJobDefinition.stoppingCondition.maxRuntimeInSeconds` and a `spec.trainingJobDefinition.stoppingCondition.maxWaitTimeInSeconds`.

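A sketch of the relevant fragment, with illustrative values:

```
spec:
  trainingJobDefinition:
    enableManagedSpotTraining: true
    stoppingCondition:
      maxRuntimeInSeconds: 3600
      # Must be at least as large as maxRuntimeInSeconds
      maxWaitTimeInSeconds: 7200
```
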
## Submitting your Hyperparameter Job
### Create a Hyperparameter Job
To submit your prepared hyperparameter job specification, apply the specification to your Kubernetes cluster as follows:
```
$ kubectl apply -f my-hyperparameter-job.yaml
hyperparametertuningjob.sagemaker.services.k8s.aws/my-hyperparameter-job created
```
### List Hyperparameter Jobs
To list all Hyperparameter jobs created using the ACK controller, use the following command:
```
$ kubectl get hyperparametertuningjob
```
### Describe a Hyperparameter Job
To get more details about the Hyperparameter job once it has been submitted - for example to check its status, errors, or parameters - use the following command:
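```
$ kubectl describe hyperparametertuningjob my-hyperparameter-job
```
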

# Data Quality Job Definition Sample

This sample demonstrates how to create data quality job definitions for monitoring a SageMaker endpoint using the AWS Controllers for Kubernetes (ACK) service controller for Amazon SageMaker.

## Prerequisites
This sample assumes that you have already configured a Kubernetes cluster with the ACK operator. It also assumes that you have installed `kubectl` - you can find a link on our [installation page](To do).

You will need an [Endpoint](/samples/endpoint/README.md) configured in SageMaker, and you will need to run a baselining job to generate baseline statistics and constraints. The resulting baseline files are referenced from the job definition, as sketched below.

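A minimal sketch of how the baseline resources might be referenced; the field names are assumptions modeled on the SageMaker `CreateDataQualityJobDefinition` API and should be checked against your installed CRDs:

```
spec:
  dataQualityBaselineConfig:
    constraintsResource:
      s3URI: s3://<your bucket>/baseline/constraints.json
    statisticsResource:
      s3URI: s3://<your bucket>/baseline/statistics.json
```
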
### Get an Image
All SageMaker data quality job definitions run from within a container that has all necessary dependencies and modules pre-installed, with the data quality scripts referencing the expected input and output directories. Sample container images are [available](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-algo-docker-registry-paths.html).

A container image URL and tag has the following structure:
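```
<account-id>.dkr.ecr.<region>.amazonaws.com/<image-name>:<tag>
```
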
## Submitting your Data Quality Job Definition

### Create a Data Quality Job Definition

To submit your prepared data quality job definition, apply the specification to your Kubernetes cluster as follows:

```
$ kubectl apply -f my-data-quality-job-definition.yaml
dataqualityjobdefinitions.sagemaker.services.k8s.aws/my-data-quality-job-definition created
```
### List Data Quality Job Definitions
To list all Data Quality Job Definitions created using the ACK controller, use the following command:
```
$ kubectl get dataqualityjobdefinitions
```
### Describe a Data Quality Job Definition
To get more details about the Data Quality Job Definition once it has been submitted - for example to check its status, errors, or parameters - use the following command:
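```
$ kubectl describe dataqualityjobdefinition my-data-quality-job-definition
```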