Deploying SQL Server 2019 container on RHEL 8 with podman
Having a fresh install of RHEL 8 in my lab environment, I was curious to take a look at Red Hat's new containerization stack in the context of SQL Server 2019. Chances are good that the next version of SQL Server will be available and supported on the latest version of Red Hat, but for now this blog post is purely experimental. This time I wanted to share with you some thoughts about the new podman command.
First of all, we should be aware that with RHEL 8 Red Hat decided to replace Docker with CRI-O/podman in order to provide a "daemonless" container world, especially for Kubernetes. In 2016, the Kubernetes project introduced the Container Runtime Interface (CRI); basically, with CRI, Kubernetes can be container-runtime-agnostic. CRI-O is an open source project initiated by Red Hat the same year that gives the ability to run containers directly from Kubernetes without any unnecessary code or tooling, as long as the container remains OCI-compliant. Because Docker is no longer shipped (and officially not supported) by Red Hat since RHEL 8, we need a client tool for working with containers, and this is where Podman steps in. To cut a long story short, Podman implements almost all the Docker CLI commands, and more.
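Because the two CLIs line up so closely, a thin wrapper is enough to keep old habits (and many existing scripts) working. A minimal sketch; the `docker` function here is just a convenience shim I define myself, not anything shipped with RHEL 8:

```shell
# Forward any "docker ..." invocation to podman.
# This works because podman mirrors the Docker CLI verbs and flags.
docker() {
    podman "$@"
}
```

With this in a shell profile, `docker ps` or `docker images` behave as expected. Red Hat also provides a podman-docker package with a similar shim.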
So, let's have an overview of the Podman commands through the installation of a SQL Server 2019 based container. It is worth noting that Podman is not intended to manage a full "standalone" container environment on its own and should be used with a container orchestrator like K8s or an orchestration platform like OpenShift. That said, let's first create a host directory to persist the SQL Server database files.
$ sudo mkdir -p /var/mssql/data
$ sudo chmod -R 755 /var/mssql/data
Then let’s download the SQL Server 2019 RHEL image. We will use the following Podman command:
$ sudo podman pull mcr.microsoft.com/mssql/rhel/server:2019-CTP3.1
Trying to pull mcr.microsoft.com/mssql/rhel/server:2019-CTP3.1...
Getting image source signatures
Copying blob 079e961eee89: 70.54 MiB / 70.54 MiB [========================] 1m3s
Copying blob 1b493d38a6d3: 1.20 KiB / 1.20 KiB [==========================] 1m3s
Copying blob 89e62e5b4261: 333.24 MiB / 333.24 MiB [======================] 1m3s
Copying blob d39017c722a8: 174.82 MiB / 174.82 MiB [======================] 1m3s
Copying config dbba412361d7: 4.98 KiB / 4.98 KiB [==========================] 0s
Writing manifest to image destination
Storing signatures
dbba412361d7ca4fa426387e1d6fc3ec85e37d630bfe70e6599b5116d392394d
Note that if you're already comfortable with the Docker commands, the shift to Podman will be easy thanks to the similarity between the two tools. To get information about the freshly pulled image, we can use the following Podman commands:
$ sudo podman images
REPOSITORY                           TAG           IMAGE ID       CREATED       SIZE
mcr.microsoft.com/mssql/rhel/server  2019-CTP3.1   dbba412361d7   3 weeks ago   1.79 GB

$ sudo podman inspect dbba
…
"GraphDriver": {
    "Name": "overlay",
    "Data": {
        "LowerDir": "/var/lib/containers/storage/overlay/b2769e971a1bdb62f1c0fd9dcc0e9fe727dca83f52812abd34173b49ae55e37d/diff:/var/lib/containers/storage/overlay/4b0cbf0d9d0ff230916734a790f47ab2adba69db44a79c8eac4c814ff4183c6d/diff:/var/lib/containers/storage/overlay/9197342671da8b555f200e47df101da5b7e38f6d9573b10bd3295ca9e5c0ae28/diff",
        "MergedDir": "/var/lib/containers/storage/overlay/b372c0d6ff718d2d182af4639870dc6e4247f684d81a8b2dc2649f8517b9fc53/merged",
        "UpperDir": "/var/lib/containers/storage/overlay/b372c0d6ff718d2d182af4639870dc6e4247f684d81a8b2dc2649f8517b9fc53/diff",
        "WorkDir": "/var/lib/containers/storage/overlay/b372c0d6ff718d2d182af4639870dc6e4247f684d81a8b2dc2649f8517b9fc53/work"
    }
},
…
As shown above, Podman uses the CRI-O back-end store directory under /var/lib/containers, instead of the Docker default storage location (/var/lib/docker).
Let's now take a look at the Podman info command:
$ podman info
…
OCIRuntime:
  package: runc-1.0.0-54.rc5.dev.git2abd837.module+el8+2769+577ad176.x86_64
  path: /usr/bin/runc
  version: 'runc version spec: 1.0.0'
…
store:
  ConfigFile: /home/clustadmin/.config/containers/storage.conf
  ContainerStore:
    number: 0
  GraphDriverName: overlay
The same kind of information is provided by the Docker info command, including the runtime and the graph driver name (overlay in my case). Generally speaking, creating a container and getting information about it with Podman is pretty similar to the usual Docker commands. Here, for instance, is the command to spin up a SQL Server container based on the RHEL image:
$ sudo podman run -d -e 'ACCEPT_EULA=Y' \
>        -e 'MSSQL_SA_PASSWORD=Password1' \
>        --name 'sqltest' \
>        -p 1460:1433 \
>        -v /var/mssql/data:/var/opt/mssql/data:Z \
>        mcr.microsoft.com/mssql/rhel/server:2019-CTP3.1
4f5128d36e44b1f55d23e38cbf8819041f84592008d0ebb2b24ff59065314aa4

$ sudo podman ps
CONTAINER ID  IMAGE                                            COMMAND               CREATED        STATUS            PORTS                   NAMES
4f5128d36e44  mcr.microsoft.com/mssql/rhel/server:2019-CTP3.1  /opt/mssql/bin/sq...  4 seconds ago  Up 3 seconds ago  0.0.0.0:1460->1433/tcp  sqltest
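SQL Server needs a few seconds to initialize before it accepts logins, so scripted deployments usually poll the published port before going further. A minimal sketch, assuming the sqlcmd client (mssql-tools) is installed on the host and using host port 1460 mapped above; the `wait_for_sql` helper name is mine:

```shell
# Poll SQL Server through the published port until a trivial query
# succeeds, or give up after the requested number of attempts.
wait_for_sql() {
    attempts=$1
    while [ "$attempts" -gt 0 ]; do
        if sqlcmd -S localhost,1460 -U sa -P Password1 \
                  -Q "SELECT 1" >/dev/null 2>&1; then
            echo "SQL Server is ready"
            return 0
        fi
        attempts=$((attempts - 1))
        sleep 2
    done
    echo "SQL Server did not come up in time" >&2
    return 1
}
```

For example, `wait_for_sql 30` right after the podman run blocks for up to a minute before any first login or restore attempt.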
Here comes the interesting part. Looking at the pstree output, we may notice that there is no dependency on any (Docker) daemon with the CRI-O implementation. With the Docker implementation we would usually find the containerd daemon and the related shim for the process within the tree.
$ pstree
systemd─┬─NetworkManager───2*[{NetworkManager}]
        ├─…
        ├─conmon─┬─sqlservr─┬─sqlservr───138*[{sqlservr}]
        │        │          └─{sqlservr}
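The parent chain can also be checked per process instead of eyeballing the whole tree. A small sketch (the `ancestry` helper is mine, demonstrated on the current shell; on the container host you would pass the sqlservr PID and see conmon, not dockerd/containerd, as the supervisor):

```shell
# Print the command names of a process and all its ancestors up to
# PID 1 by following the PPID column of ps.
ancestry() {
    pid=$1
    while [ "$pid" -gt 1 ] 2>/dev/null; do
        ps -o comm= -p "$pid"
        pid=$(ps -o ppid= -p "$pid" | tr -d ' ')
    done
}
ancestry $$
```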
By using the runc command below, we may notice that the MSSQL container (identified by its ID here) is actually running through the CRI-O and runc runtime.
$ sudo runc list -q
4f5128d36e44b1f55d23e38cbf8819041f84592008d0ebb2b24ff59065314aa4
Let's have a look at the existing namespaces. The PID 9449 corresponds to the SQL Server process running in isolation through Linux namespaces.
$ sudo lsns
…
4026532116 net   2  9449 root /opt/mssql/bin/sqlservr
4026532187 mnt   2  9449 root /opt/mssql/bin/sqlservr
4026532188 uts   2  9449 root /opt/mssql/bin/sqlservr
4026532189 ipc   2  9449 root /opt/mssql/bin/sqlservr
4026532190 pid   2  9449 root /opt/mssql/bin/sqlservr

$ ps aux | grep sqlservr
root      9449  0.1  0.6  152072  25336 ?      Ssl  05:08   0:00 /opt/mssql/bin/sqlservr
root      9465  5.9 18.9 9012096 724648 ?      Sl   05:08   0:20 /opt/mssql/bin/sqlservr
clustad+  9712  0.0  0.0   12112   1064 pts/0  S+   05:14   0:00 grep --color=auto sqlservr
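The namespace IDs reported by lsns are also visible straight from procfs: every process exposes symlinks under /proc/<pid>/ns whose targets encode the namespace inode. Demonstrated here against the current shell; substitute 9449 on the container host to match the sqlservr entries above:

```shell
# Each entry under /proc/<pid>/ns is a symlink such as net:[4026532116];
# two processes share a namespace when the link targets are identical.
ls -l /proc/$$/ns
readlink /proc/$$/ns/net
```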
We can double check that the process belongs to the SQL Server container by using the nsenter command:
$ sudo nsenter -t 17182 --mount --uts --ipc --net --pid sh
sh-4.2# ps aux
USER  PID %CPU %MEM     VSZ    RSS TTY STAT START TIME COMMAND
root    1  0.0  0.7  152076  28044 ?   Ssl  Jul23 0:00 /opt/mssql/bin/sqlservr
root    9  2.2 19.7 9034224 754820 ?   Sl   Jul23 0:28 /opt/mssql/bin/sqlservr
root  319  0.0  0.0   13908   3400 ?   S    00:01 0:00 sh
root  326  0.0  0.1   53832   3900 ?   R+   00:02 0:00 ps aux
Well, we used different Podman commands to spin up a container that meets the OCI specification, just like Docker. For the sake of curiosity, let's build a custom image from a Dockerfile. In fact, this is a custom image we developed for customers to meet our best-practice requirements.
$ ls -l
total 40
drwxrwxr-x. 2 clustadmin clustadmin   70 Jul 24 02:06 BestPractices
drwxrwxr-x. 2 clustadmin clustadmin   80 Jul 24 02:06 DMK
-rw-rw-r--. 1 clustadmin clustadmin  614 Jul 24 02:06 docker-compose.yml
-rw-rw-r--. 1 clustadmin clustadmin 2509 Jul 24 02:06 Dockerfile
-rw-rw-r--. 1 clustadmin clustadmin 3723 Jul 24 02:06 entrypoint.sh
-rw-rw-r--. 1 clustadmin clustadmin 1364 Jul 24 02:06 example.docker-swarm-compose.yml
-rw-rw-r--. 1 clustadmin clustadmin  504 Jul 24 02:06 healthcheck.sh
-rw-rw-r--. 1 clustadmin clustadmin   86 Jul 24 02:06 mssql.conf
-rw-rw-r--. 1 clustadmin clustadmin 4497 Jul 24 02:06 postconfig.sh
-rw-rw-r--. 1 clustadmin clustadmin 2528 Jul 24 02:06 Readme.md
drwxrwxr-x. 2 clustadmin clustadmin   92 Jul 24 02:06 scripts
To build an image from a Dockerfile, the corresponding Podman command is as follows:
$ sudo podman build -t dbi_mssql_linux:2019-CTP3.1 .
…
--> 5db120fba51f3adc7482ec7a9fed5cc4194f13e97b855d9439a1386096797c39
STEP 65: FROM 5db120fba51f3adc7482ec7a9fed5cc4194f13e97b855d9439a1386096797c39
STEP 66: EXPOSE ${MSSQL_TCP_PORT}
--> 8b5e8234af47adb26f80d64abe46715637bd48290b4a6d7711ddf55c393cd5a8
STEP 67: FROM 8b5e8234af47adb26f80d64abe46715637bd48290b4a6d7711ddf55c393cd5a8
STEP 68: ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
--> 11045806b8af7cf2f67e5a279692e6c9e25212105bcd104ed17b235cdaea97fe
STEP 69: FROM 11045806b8af7cf2f67e5a279692e6c9e25212105bcd104ed17b235cdaea97fe
STEP 70: CMD ["tail -f /dev/null"]
--> bcb8c26d503010eb3e5d72da4b8065aa76aff5d35fac4d7958324ac3d97d5489
STEP 71: FROM bcb8c26d503010eb3e5d72da4b8065aa76aff5d35fac4d7958324ac3d97d5489
STEP 72: HEALTHCHECK --interval=15s CMD ["/usr/local/bin/healthcheck.sh"]
--> e7eedf0576f73c95b19adf51c49459b00449da497cf7ae417e597dd39a9e4c8f
STEP 73: COMMIT dbi_mssql_linux:2019-CTP3.1
The image built is now available in the local repository:
$ sudo podman images
REPOSITORY                           TAG           IMAGE ID       CREATED         SIZE
localhost/dbi_mssql_linux            2019-CTP3.1   e7eedf0576f7   2 minutes ago   1.79 GB
mcr.microsoft.com/mssql/rhel/server  2019-CTP3.1   dbba412361d7   3 weeks ago     1.79 GB
The next step consists in spinning up a SQL Server container based on this new image. Note that I used a custom parameter DMK=Y to drive the creation of our DMK maintenance tool, which includes the deployment of a custom dbi_tools database and the related objects that carry out the database maintenance.
$ sudo podman run -d -e 'ACCEPT_EULA=Y' \
>        -e 'MSSQL_SA_PASSWORD=Password1' -e 'DMK=Y' \
>        --name 'sqltest2' \
>        -p 1470:1433 \
>        localhost/dbi_mssql_linux:2019-CTP3.1
d057e0ca41f08a948de4206e9aa07b53450c2830590f2429e50458681d230f6b
Let's check whether the dbi_tools database has been created during the container runtime phase:
$ sudo podman exec -ti d057 /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P Password1 -Q "SELECT name FROM sys.databases"
name
--------------------------------------------------------------------------------------------------------------------------------
master
tempdb
model
msdb
dbi_tools
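That kind of query is easy to turn into a scripted post-deployment check. A minimal sketch (the `check_db` helper name is mine): it succeeds when a given database name appears in the list it reads on stdin, which pairs naturally with sqlcmd's -h -1 and -W options to get one bare name per line.

```shell
# Succeed (and say so) when the database name given as $1 appears as
# a full line on stdin; fail silently otherwise.
check_db() {
    grep -qx "$1" && echo "$1 present"
}
```

For instance: `sudo podman exec d057 /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P Password1 -h -1 -W -Q "SET NOCOUNT ON; SELECT name FROM sys.databases" | check_db dbi_tools`.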
Finally, to make the transition to a future blog post, the Podman tool comes with extra commands (still under development) that are not available in the Docker CLI. The following example generates a YAML deployment file and the corresponding service from an existing container. Please note, however, that containers with volumes are not supported yet.
The container definition is as follows:
$ sudo podman run -d -e 'ACCEPT_EULA=Y' \
>        -e 'MSSQL_SA_PASSWORD=Password1' \
>        --name 'sqltestwithnovolumes' \
>        -p 1480:1433 \
>        mcr.microsoft.com/mssql/rhel/server:2019-CTP3.1
7e99581eaec4c91d7c13af4525bfb3805d5b56e675fdb53d0061c231294cd442
And we get the corresponding YAML file generated by the Podman command:
$ sudo podman generate kube -s 7e99
# Generation of Kubernetes YAML is still under development!
#
# Save the output of this file and use kubectl create -f to import
# it into Kubernetes.
#
# Created with podman-1.0.2-dev
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: 2019-07-24T03:52:18Z
  labels:
    app: sqltestwithnovolumes
  name: sqltestwithnovolumes
spec:
  containers:
  - command:
    - /opt/mssql/bin/sqlservr
    env:
    - name: PATH
      value: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
    - name: TERM
      value: xterm
    - name: HOSTNAME
    - name: container
      value: oci
    - name: ACCEPT_EULA
      value: "Y"
    - name: MSSQL_SA_PASSWORD
      value: Password1
    image: mcr.microsoft.com/mssql/rhel/server:2019-CTP3.1
    name: sqltestwithnovolumes
    ports:
    - containerPort: 1433
      hostPort: 1480
      protocol: TCP
    resources: {}
    securityContext:
      allowPrivilegeEscalation: true
      capabilities: {}
      privileged: false
      readOnlyRootFilesystem: false
    workingDir: /
status: {}
---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2019-07-24T03:52:18Z
  labels:
    app: sqltestwithnovolumes
  name: sqltestwithnovolumes
spec:
  ports:
  - name: "1433"
    nodePort: 30309
    port: 1433
    protocol: TCP
    targetPort: 0
  selector:
    app: sqltestwithnovolumes
  type: NodePort
status:
  loadBalancer: {}
By default, a service of type NodePort has been created by the command. This last command needs further testing for sure!
See you