138 changes: 138 additions & 0 deletions getting-started/ceph/README.md
@@ -0,0 +1,138 @@
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
-->

# Getting Started with Apache Polaris and Ceph

## Overview

This guide describes how to spin up a **single-node Ceph cluster** with **RADOS Gateway (RGW)** for S3-compatible storage and configure it for use by **Polaris**.

This example cluster is configured for basic access key authentication only.
It does not include STS (Security Token Service) or temporary credentials.

> **Review comment (Member):** Would you mind adding a getting-started with IAM/STS as a follow-up of this PR?

All access in the Ceph RGW (RADOS Gateway) and Polaris integration uses static S3-style credentials, created via `radosgw-admin user create`.
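
For reference, the `rgw1` service in `docker-compose.yml` creates these credentials inside the RGW container; the user ID and display name below come from that file, and the key values are supplied via the `RGW_ACCESS_KEY`/`RGW_SECRET_KEY` environment variables:

```shell
radosgw-admin user create \
  --uid="polaris-user" \
  --display-name="Polaris User" \
  --access-key="${RGW_ACCESS_KEY}" \
  --secret-key="${RGW_SECRET_KEY}"
```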

Spark is used as a query engine. This example assumes a local Spark installation.
See the [Spark Notebooks Example](../spark/README.md) for a more advanced Spark setup.

## Starting the Example

The services are started **in sequence**:
1. Monitor + Manager
2. OSD
3. RGW
4. Polaris

Note: This example pulls the `apache/polaris:latest` image but assumes version `1.2.0-incubating` or later.


### 1. Start monitor and manager
```shell
docker-compose up -d mon1 mgr
```

### 2. Start OSD
```shell
docker-compose up -d osd1
```

### 3. Start RGW
```shell
docker-compose up -d rgw1
```
#### Check status
```shell
docker exec -it cephpolaris-mon1-1 ceph -s
```
You should see something like:
```yaml
  cluster:
    id:     b2f59c4b-5f14-4f8c-a9b7-3b7998c76a0e
    health: HEALTH_WARN
            mon is allowing insecure global_id reclaim
            1 monitors have not enabled msgr2
            6 pool(s) have no replicas configured

  services:
    mon: 1 daemons, quorum mon1 (age 49m)
    mgr: mgr(active, since 94m)
    osd: 1 osds: 1 up (since 36m), 1 in (since 93m)
    rgw: 1 daemon active (1 hosts, 1 zones)
```
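
You can also verify that RGW is answering S3 requests on the published port (`7480`, per `docker-compose.yml`); an anonymous request to the endpoint should return a small `ListAllMyBucketsResult` XML document:

```shell
curl -s http://localhost:7480
```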

### 4. Create bucket for Polaris storage
```shell
docker-compose up -d setup_bucket
```
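
The `setup_bucket` service is a one-shot `s5cmd` container that creates the `polaris-storage` bucket (the name comes from `S3_POLARIS_BUCKET` in the compose environment). The container stays running and already has the RGW credentials in its environment, so you can reuse it to confirm the bucket exists:

```shell
docker-compose exec setup_bucket /s5cmd --endpoint-url http://rgw1:7480 ls
```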

### 5. Run Polaris service
```shell
docker-compose up -d polaris
```
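
Polaris serves its API on port `8181`. The health endpoint lives on the management port `8182`, which is not published to the host in this example, so check it from inside the container (this mirrors the compose healthcheck):

```shell
docker-compose exec polaris curl -s http://localhost:8182/q/health
```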

### 6. Set up the Polaris catalog
```shell
docker-compose up -d polaris-setup
```
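
The `polaris-setup` service obtains a token as the `root` principal and creates `quickstart_catalog` backed by the `polaris-storage` bucket. To confirm it succeeded, you can request a token and list the catalogs yourself (a sketch; the token endpoint follows the Iceberg REST OAuth flow that Polaris implements, and `root`/`s3cr3t` come from `docker-compose.yml`):

```shell
# Obtain a bearer token for the root principal and extract it from the JSON response.
TOKEN=$(curl -s -X POST http://localhost:8181/api/catalog/v1/oauth/tokens \
  -d 'grant_type=client_credentials&client_id=root&client_secret=s3cr3t&scope=PRINCIPAL_ROLE:ALL' \
  | sed -E 's/.*"access_token" *: *"([^"]+)".*/\1/')

# List the catalogs; the response should include quickstart_catalog.
curl -s -H "Authorization: Bearer ${TOKEN}" \
  http://localhost:8181/api/management/v1/catalogs
```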

## Connecting From Spark

```shell
bin/spark-sql \
--packages org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.9.0,org.apache.iceberg:iceberg-aws-bundle:1.9.0 \
--conf spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions \
--conf spark.sql.catalog.polaris=org.apache.iceberg.spark.SparkCatalog \
--conf spark.sql.catalog.polaris.type=rest \
--conf spark.sql.catalog.polaris.uri=http://localhost:8181/api/catalog \
--conf spark.sql.catalog.polaris.token-refresh-enabled=false \
--conf spark.sql.catalog.polaris.warehouse=quickstart_catalog \
--conf spark.sql.catalog.polaris.scope=PRINCIPAL_ROLE:ALL \
--conf spark.sql.catalog.polaris.credential=root:s3cr3t \
--conf spark.sql.catalog.polaris.client.region=irrelevant
```

Note: `s3cr3t` is the client secret for the `root` principal, defined via `POLARIS_BOOTSTRAP_CREDENTIALS` in the `docker-compose.yml` file.

Note: The `client.region` setting is required by the AWS S3 client, but its value is irrelevant in this example
since Ceph does not enforce a specific region.

## Running Queries

Run inside the Spark SQL shell:

```
spark-sql (default)> use polaris;
Time taken: 0.837 seconds

spark-sql ()> create namespace ns;
Time taken: 0.374 seconds

spark-sql ()> create table ns.t1 as select 'abc';
Time taken: 2.192 seconds

spark-sql ()> select * from ns.t1;
abc
Time taken: 0.579 seconds, Fetched 1 row(s)
```
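
You can confirm that the table data actually landed in Ceph by listing the warehouse bucket through the `setup_bucket` container (a sketch; the exact data and metadata file paths are generated by Iceberg and will differ):

```shell
docker-compose exec setup_bucket /s5cmd --endpoint-url http://rgw1:7480 ls 's3://polaris-storage/*'
```
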
## Lack of Credential Vending

Notice that the Spark configuration does not set the `X-Iceberg-Access-Delegation` header.
This is because the example cluster does not include STS (Security Token Service) or temporary credentials.

The lack of STS API is represented in the Catalog storage configuration by the
`stsUnavailable=true` property.
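
For reference, this is the storage configuration that `polaris-setup` sends when creating the catalog (excerpted from `docker-compose.yml`):

```shell
# "stsUnavailable" tells Polaris that the endpoint offers no STS API,
# so no temporary credentials are requested or vended to query engines.
export STORAGE_CONFIG_INFO='{"storageType":"S3",
  "endpoint":"http://rgw1:7480",
  "stsUnavailable":"true",
  "pathStyleAccess":true}'
```
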
29 changes: 29 additions & 0 deletions getting-started/ceph/ceph-conf/ceph.conf
@@ -0,0 +1,29 @@
[global]
fsid = b2f59c4b-5f14-4f8c-a9b7-3b7998c76a0e
mon_initial_members = mon1
mon_host = 172.18.0.2
public_network = 172.18.0.0/16
cluster_network = 172.18.0.0/16
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd_pool_default_size = 1
osd_pool_default_min_size = 1
osd_pool_default_pg_num = 333
osd_crush_chooseleaf_type = 1
mon_allow_pool_size_one = true
# max open files = 655350
# cephx cluster require signatures = false
# cephx service require signatures = false
# osd max object name len = 256
# osd max object namespace len = 64

[mon.mon1]
mon_data = /var/lib/ceph/mon/ceph-mon1
mon_rocksdb_min_wal_logs = 1
mon_rocksdb_max_total_wal_size = 64M
mon_rocksdb_options = max_background_compactions=4;max_background_flushes=2

[client.rgw1]
host = ceph-rgw1
rgw_frontends = civetweb port=7480
227 changes: 227 additions & 0 deletions getting-started/ceph/docker-compose.yml
@@ -0,0 +1,227 @@
networks:
cluster-net:
driver: bridge

services:

mon1:
image: ${CEPH_CONTAINER_IMAGE}
entrypoint: "/bin/sh"
command:
- "-c"
- >-
set -ex;
mkdir -p /var/lib/ceph/osd/ceph-0;
ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *';
ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring \
--gen-key -n client.admin \
--cap mon 'allow *' --cap osd 'allow *' --cap mgr 'allow *' --cap mds 'allow *';
ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring \
--gen-key -n client.bootstrap-osd \
--cap mon 'profile bootstrap-osd' --cap mgr 'allow r';
ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring;
ceph-authtool /tmp/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring;
chown ceph:ceph /tmp/ceph.mon.keyring;
monmaptool --create --add mon1 ${MON_IP} --fsid ${FSID} /tmp/monmap --clobber;
sudo -u ceph ceph-mon --mkfs -i mon1 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring;
ceph-mon -i mon1 -f -d;
environment:
MON_IP: ${MON_IP}
CEPH_PUBLIC_NETWORK: ${MON1_CEPH_PUBLIC_NETWORK}
FSID: ${FSID}
volumes:
- ./ceph-conf:/etc/ceph
- ./bootstrap-osd:/var/lib/ceph/bootstrap-osd
- ./osd1:/var/lib/ceph/osd/ceph-0/
networks:
- cluster-net

mgr:
image: ${CEPH_CONTAINER_IMAGE}
entrypoint: "/bin/sh"
command:
- "-c"
- >-
set -ex;
mkdir -p /var/lib/ceph/mgr/ceph-mgr;
ceph auth get-or-create mgr.mgr mon 'allow profile mgr' osd 'allow *' mds 'allow *' > /var/lib/ceph/mgr/ceph-mgr/keyring;
ceph-mgr -f -i mgr;
volumes:
- ./ceph-conf:/etc/ceph
depends_on:
- mon1
networks:
- cluster-net
ports:
- ${DASHBOARD_PORT}:${INTERNAL_DASHBOARD_PORT}

osd1:
pid: host
privileged: true
image: ${CEPH_CONTAINER_IMAGE}
environment:
OSD_UUID_1: ${OSD_UUID_1}
entrypoint: "/bin/sh"
command:
- "-c"
- >-
set -ex;
mkdir -p /var/lib/ceph/osd/ceph-0;
chown -R ceph:ceph /var/lib/ceph/osd/ceph-0;
ceph-authtool --create-keyring /var/lib/ceph/osd/ceph-0/keyring \
--gen-key -n osd.0 \
--cap osd 'allow *' \
--cap mon 'allow profile osd';
ceph auth del osd.0 || true;
ceph auth add osd.0 -i /var/lib/ceph/osd/ceph-0/keyring;
ceph osd new ${OSD_UUID_1} -n client.bootstrap-osd -k /var/lib/ceph/bootstrap-osd/ceph.keyring;
ceph-osd -i 0 --mkfs --osd-data /var/lib/ceph/osd/ceph-0 --osd-uuid ${OSD_UUID_1} \
--keyring /var/lib/ceph/osd/ceph-0/keyring;
ceph-osd -f -i 0;
volumes:
- ./ceph-conf:/etc/ceph
- ./bootstrap-osd:/var/lib/ceph/bootstrap-osd
depends_on:
- mon1
networks:
- cluster-net

mds1:
image: ${CEPH_CONTAINER_IMAGE}
entrypoint: "/bin/sh"
command:
- "-c"
- >-
set -ex;
mkdir -p /var/lib/ceph/mds/ceph-admin;
ceph-authtool --create-keyring /var/lib/ceph/mds/ceph-admin/keyring --gen-key -n mds. --cap mds 'allow *';
ceph-mds -f -i admin;
hostname: "ceph-mds1-host"
environment:
CEPHFS_CREATE: 1
volumes:
- ./ceph-conf:/etc/ceph
depends_on:
- osd1
networks:
- cluster-net
rgw1:
image: ${CEPH_CONTAINER_IMAGE}
container_name: rgw1
environment:
MON_IP: ${MON_IP}
CEPH_PUBLIC_NETWORK: ${MON1_CEPH_PUBLIC_NETWORK}
RGW_ACCESS_KEY: ${RGW_ACCESS_KEY}
RGW_SECRET_KEY: ${RGW_SECRET_KEY}
entrypoint: "/bin/sh"
command:
- "-c"
- >-
set -ex;
mkdir -p /var/lib/ceph/radosgw/ceph-rgw1;
ceph auth get-or-create client.rgw1 mon 'allow rw' osd 'allow rwx';
ceph auth caps client.rgw1 mon 'allow rw' osd 'allow rwx';
ceph-authtool --create-keyring /var/lib/ceph/radosgw/ceph-rgw1/keyring --gen-key -n client.rgw1 --cap osd 'allow *' --cap mon 'allow *';
ceph auth del client.rgw1 || true;
ceph auth add client.rgw1 -i /var/lib/ceph/radosgw/ceph-rgw1/keyring;
radosgw-admin user create --uid="polaris-user" \
--display-name="Polaris User" \
--access-key="${RGW_ACCESS_KEY}" \
--secret-key="${RGW_SECRET_KEY}" || true;
echo ">>> RGW user created (access=${RGW_ACCESS_KEY}, secret=${RGW_SECRET_KEY})";
radosgw -n client.rgw1 --rgw-frontends="beast port=7480" --foreground;
ports:
- "7480:7480" # RGW HTTP endpoint (S3)
- "7481:7481"
volumes:
- ./ceph-conf:/etc/ceph
depends_on:
- osd1
networks:
- cluster-net

setup_bucket:
image: peakcom/s5cmd:latest
depends_on:
- rgw1
environment:
AWS_ACCESS_KEY_ID: ${RGW_ACCESS_KEY}
AWS_SECRET_ACCESS_KEY: ${RGW_SECRET_KEY}
S3_ENDPOINT_URL: ${S3_ENDPOINT_URL}
S3_REGION: ${S3_REGION}
S3_POLARIS_BUCKET: ${S3_POLARIS_BUCKET}
entrypoint: "/bin/sh"
command:
- "-c"
- >-
set -ex;
echo ">>> Waiting for RGW to become ready...";
sleep 5;
echo ">>> Create bucket if not exist...";
/s5cmd --endpoint-url ${S3_ENDPOINT_URL} mb s3://${S3_POLARIS_BUCKET} || true;
tail -f /dev/null;
networks:
- cluster-net

polaris:
image: apache/polaris:latest
ports:
# API port
- "8181:8181"
# Optional, allows attaching a debugger to the Polaris JVM
- "5005:5005"
depends_on:
- rgw1
environment:
JAVA_DEBUG: true
JAVA_DEBUG_PORT: "*:5005"
AWS_REGION: us-west-2
AWS_ACCESS_KEY_ID: ${RGW_ACCESS_KEY}
AWS_SECRET_ACCESS_KEY: ${RGW_SECRET_KEY}
POLARIS_BOOTSTRAP_CREDENTIALS: POLARIS,root,s3cr3t
polaris.realm-context.realms: POLARIS
quarkus.otel.sdk.disabled: "true"
healthcheck:
test: ["CMD", "curl", "http://localhost:8182/q/health"]
interval: 2s
timeout: 10s
retries: 10
start_period: 10s
networks:
- cluster-net

polaris-setup:
image: alpine/curl
depends_on:
polaris:
condition: service_healthy
environment:
- CLIENT_ID=root
- CLIENT_SECRET=s3cr3t
volumes:
- ../assets/polaris/:/polaris
entrypoint: "/bin/sh"
command:
- "-c"
- >-
chmod +x /polaris/create-catalog.sh;
chmod +x /polaris/obtain-token.sh;
source /polaris/obtain-token.sh;
echo Creating catalog...;
export STORAGE_CONFIG_INFO='{"storageType":"S3",
"endpoint":"http://rgw1:7480",
"stsUnavailable":"true",
"pathStyleAccess":true}';
export STORAGE_LOCATION='s3://polaris-storage';
/polaris/create-catalog.sh POLARIS $$TOKEN;
echo Extra grants...;
curl -H "Authorization: Bearer $$TOKEN" -H 'Content-Type: application/json' \
-X PUT \
http://polaris:8181/api/management/v1/catalogs/quickstart_catalog/catalog-roles/catalog_admin/grants \
-d '{"type":"catalog", "privilege":"CATALOG_MANAGE_CONTENT"}';
echo Done.;
curl -H "Authorization: Bearer $$TOKEN" -H 'Content-Type: application/json' \
-X GET \
http://polaris:8181/api/management/v1/catalogs;
networks:
- cluster-net