It's No Secret, We Use Okteto

Context
I joined Okteto almost 4 months ago with the usual mindset about what dev tools are, how to use them, and what a dev environment is supposed to be.
Mostly, this consists of:
- Installing a bunch of software on your computer
- Following the steps in the README with your fingers crossed
- Running the application (or a subset of it) locally
All of this was greatly simplified when Docker came into the picture, allowing you to package your application into containers and deploy them easily with Docker Compose. It came at the cost of renting out half of your computer's resources to Docker, though.
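As a sketch, a typical local setup of this kind looks something like the compose file below (the services and images are illustrative, not our actual stack):
# docker-compose.yml (illustrative sketch)
version: "3.8"
services:
  api:
    build: ./api
    ports:
      - "8080:8080"
    depends_on:
      - db
  db:
    image: postgres:13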
Local development is great for simpler applications. For bigger or more complex systems, however, it’s not trivial to keep the dev setup reasonably close to a production environment, which leads to "works on my machine" scenarios.
Provisioning a Cluster
For us, the first step to start a development environment is to provision a production cluster. Say what?
At Okteto, each developer has their own cluster. This is a cluster exactly like (or very similar to) the ones used by our Enterprise clients.(*) It's much more realistic than using minikube, for example, where you're still constrained to a local setup (single node, Docker as the container runtime, etc.) and a single configuration.
Instead, having a production-like cluster allows us to test and reproduce a scenario for any combination in the provider/version/configuration installation matrix. If, for example, an issue is reported on AWS running Okteto Enterprise version 0.9.2, we deploy a cluster with that exact configuration on AWS and okteto up into it.
Installation should be flawless for us; otherwise, clients are probably affected too. This keeps our internal developer experience aligned with what clients experience.
(*) Clusters are provisioned through Terraform, and Okteto is released as a Helm chart.
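In broad strokes, bringing up a dev cluster boils down to something like the following (a sketch; the actual Terraform variables and chart values are internal and omitted here):
# provision the cluster (the variable name is illustrative)
$ terraform apply -var="cluster_name=devfran"
# install the Okteto Enterprise chart (the same command our Makefile runs, shown later)
$ helm upgrade --install ${OKTETO_RELEASE_NAME} chart/okteto-enterprise --namespace=okteto ...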
Configuring Okteto and Kubernetes
Once the cluster is fully provisioned and operational, and Okteto is installed, we use the okteto CLI to connect to it.
Okteto is deployed to the cluster in its own namespace, so any okteto CLI command should work against the internal okteto namespace just as it does for any other namespace.
However, since Okteto doesn’t allow access to itself by default, we need to grab the correct Kubernetes config to be able to access resources in the internal namespace.
We provision our dev clusters on GCP, which lets us do this easily:
$ gcloud container clusters get-credentials devfran --project development-<id>
Fetching cluster endpoint and auth data.
kubeconfig entry generated for devfran.
This creates our privileged Kubernetes context, which will allow us to run okteto up on Okteto's own resources.
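Before running okteto up, it's worth sanity-checking that the new context can actually see the internal namespace; something like this should list Okteto's own pods (using the same OKTETO_DEV_CONTEXT variable the manifests below rely on):
$ kubectl --context=${OKTETO_DEV_CONTEXT} --namespace=okteto get pods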
Putting everything together
There are two main components in an Okteto installation: the api and the frontend.
API
The api is a GraphQL API written in Go. To develop on it, we mount all of our dependencies into the dev container and use the privileged context from before.
# api/okteto.yaml
name: api
namespace: okteto
context: ${OKTETO_DEV_CONTEXT}
image: okteto/godev:1
command: bash
volumes:
  - /go/
  - /root/.cache/
  - /root/.vscode-server
sync:
  - .:/usr/src/app
persistentVolume:
  enabled: true
  size: 10Gi
For reference, this is what our godev image looks like:
# okteto/godev:1 development image
FROM okteto/golang:1
# Install air for backend autoreload
RUN curl -sSfL https://raw.githubusercontent.com/cosmtrek/air/master/install.sh | sh -s -- -b /usr/bin
We can now run our api server with autoreload (thanks to air) by simply running okteto up -f api/okteto.yaml. We work normally from our local IDE while okteto syncs every change into the remote cluster, giving us instant remote autoreloading of the api server.
$ okteto up -f api/okteto.yaml
✓ Images successfully pulled
✓ Files synchronized
Context: development_devfran
Namespace: okteto
Name: api
Welcome to your development container. Happy coding!
okteto:api app> make watch
air
  __    _   ___
 / /\  | | | |_)
/_/--\ |_| |_| \_ 1.27.3, built with Go 1.16.3
!exclude assets
!exclude bin
watching cmd
!exclude etc
watching pkg
building...
running...
INFO[0000] Starting api service... instance.id=fran-okteto-enterprise-api
INFO[0000] using no-op tracer instance.id=fran-okteto-enterprise-api
INFO[0000] loading buildkit certificate instance.id=fran-okteto-enterprise-api
INFO[0000] using google as the IDP instance.id=fran-okteto-enterprise-api
INFO[0000] buildkit certificate loaded instance.id=fran-okteto-enterprise-api
INFO[0000] initializing kubernetes config instance.id=fran-okteto-enterprise-api
INFO[0000] ssh public key: ssh-rsa ...
INFO[0000] api server is running at http://0.0.0.0:8080 instance.id=fran-okteto-enterprise-api serverName=api
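As the session shows, make watch is just a thin wrapper that launches air; the watched and excluded directories come from air's configuration in the repo. A minimal sketch of such a target:
# Makefile (sketch; air reads its watch/exclude rules from its own config file)
watch:
	air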
Frontend
Our frontend is an nginx server that serves the static React application. During development, however, we swap the image and use Node and webpack to serve the static assets with hot reloading.
# frontend/okteto.yaml
name: frontend
namespace: okteto
context: ${OKTETO_DEV_CONTEXT}
image: okteto/node:12
workdir: /app
command: ["bash"]
Same as above, we run okteto up -f frontend/okteto.yaml followed by our dev command, and get remote autoreloading while working locally in our IDE.
$ okteto up -f frontend/okteto.yaml
✓ Persistent volume successfully attached
✓ Images successfully pulled
✓ Files synchronized
Context: development_devfran
Namespace: okteto
Name: frontend
Welcome to your development container. Happy coding!
okteto:frontend app> yarn start
yarn run v1.22.5
$ yarn develop
$ webpack-dev-server --config webpack.dev.config.js
ℹ 「wds」: Project is running at http://0.0.0.0:8080/
ℹ 「wds」: webpack output is served from /
ℹ 「wds」: Content not from webpack is served from /app
ℹ 「wds」: 404s will fallback to index.html
...
ℹ 「wdm」: Compiled successfully.
In both cases, dependencies are mounted into the development containers, so installed dependencies persist across okteto up sessions.
Upgrading our cluster
We can install any Okteto version in our dev cluster as a Helm chart. Charts are created for our main branch, and by default all dev clusters upgrade to whatever is sitting on main. This can be overridden with the TAG environment variable as needed.
We upgrade our dev environment with the latest changes through our Makefile:
upgrade-dev:
	okteto down -f api/okteto.yaml
	okteto down -f frontend/okteto.yaml
	helm --kube-context=${OKTETO_DEV_CONTEXT} upgrade --install ${OKTETO_RELEASE_NAME} chart/okteto-enterprise --namespace=okteto ...
	kubectl --context=${OKTETO_DEV_CONTEXT} -n okteto rollout restart deployment/${OKTETO_RELEASE_NAME}-okteto-enterprise-frontend
	kubectl --context=${OKTETO_DEV_CONTEXT} -n okteto rollout restart deployment/${OKTETO_RELEASE_NAME}-okteto-enterprise-api
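To pin a dev cluster to a specific build instead of main, we set the TAG environment variable when invoking the target (the tag value here is illustrative):
$ TAG=0.9.2 make upgrade-dev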
Conclusion
For a large application, being able to test and reproduce a complex scenario in a local dev environment is extremely hard.
Speaking as an Okteto user, being able to plug my development workflow into a live, production-like system and know with far greater confidence that everything works as expected is invaluable.
Speaking as an Okteto engineer, it’s great to be building this disruptive technology while also taking advantage of it!