amazon web services - Enterprise private Docker registry best practices


This question sprang to mind while preparing to roll out our own private registry. What are the enterprise best practices here, and why?

Q1:

  1. Run multiple registries against one S3 storage backend? Each registry would have a setting that makes it push to a dev, qa, or prod (top-level) folder in the same S3 bucket.

  2. Run one registry with one S3 storage backend for the dev/qa/prod environments? Since the whole point of Docker is that an image runs the same anywhere, just provide different docker run parameters; the image stays identical across environments, and only the run arguments passed to it differ.

  3. Run one registry and one S3 storage backend per environment. (A rough sketch of these options follows this list.)
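
For concreteness, here is a minimal sketch of how options 1 and 3 might be wired up with the official registry:2 image, assuming its environment-variable configuration overrides for the S3 storage driver; the bucket name, credentials, image names, and ports are placeholders, not a recommendation.

    # Option 1 sketch: one registry container per environment, all sharing one
    # S3 bucket but writing under different root directories.
    docker run -d --name registry-dev -p 5000:5000 \
      -e REGISTRY_STORAGE=s3 \
      -e REGISTRY_STORAGE_S3_REGION=us-east-1 \
      -e REGISTRY_STORAGE_S3_BUCKET=my-registry-bucket \
      -e REGISTRY_STORAGE_S3_ACCESSKEY="$AWS_ACCESS_KEY_ID" \
      -e REGISTRY_STORAGE_S3_SECRETKEY="$AWS_SECRET_ACCESS_KEY" \
      -e REGISTRY_STORAGE_S3_ROOTDIRECTORY=/dev \
      registry:2
    # For option 3 you would repeat this per environment with a different
    # bucket (and likely different credentials) instead of a different
    # root directory.

    # Option 2 sketch: one registry, one image; only the run arguments differ
    # per environment.
    docker run -d --env-file ./dev.env  -p 8080:80 localhost:5000/myapp:1.4.2
    docker run -d --env-file ./prod.env -p 80:80   localhost:5000/myapp:1.4.2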

Q2:

What is the best practice for promoting an image all the way from dev to prod, and what tool sets are involved? For example, we have a central GitLab for our Dockerfiles; when we check in a new Dockerfile, a hook triggers Jenkins to build the image from that Dockerfile and check it into the registry. What is the way to promote images (unless we chose option 2 from Q1 above) to the next level - QA, and then prod?
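
One common way to handle promotion (not the only one, and option 2 from Q1 largely sidesteps it) is to build the image once and promote it by re-tagging rather than rebuilding, so the exact bits that passed QA are what reach prod. A rough sketch, with a hypothetical registry host and tag scheme:

    # Jenkins built and pushed the candidate image after the GitLab hook fired:
    #   registry.example.com/myapp:1.4.2-dev
    docker pull registry.example.com/myapp:1.4.2-dev

    # Promote to QA: same image ID, new tag.
    docker tag registry.example.com/myapp:1.4.2-dev registry.example.com/myapp:1.4.2-qa
    docker push registry.example.com/myapp:1.4.2-qa

    # After QA sign-off, promote the identical image to prod.
    docker tag registry.example.com/myapp:1.4.2-dev registry.example.com/myapp:1.4.2-prod
    docker push registry.example.com/myapp:1.4.2-prod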

Q3:

If we update one of the base images, what is the way to make sure the change propagates to the other images in the registry that build on it? For example, we update our customized base Ubuntu Dockerfile with new stuff, and we want the other Dockerfiles that use that base image to be rebuilt and pushed to the registry, so the change is automatically propagated.
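
The registry itself has no notion of cascading rebuilds, so this usually falls to CI: when the base image changes, rebuild and push everything that builds FROM it. A minimal sketch, assuming the downstream Dockerfiles live in sibling directories and that all paths and registry names are placeholders:

    # 1. Rebuild and push the customized base image.
    docker build -t registry.example.com/base-ubuntu:latest ./base-ubuntu
    docker push registry.example.com/base-ubuntu:latest

    # 2. Rebuild every downstream image; --pull forces the build to fetch the
    #    freshly pushed base instead of reusing a stale local copy.
    for dir in ./images/*/; do
      name=$(basename "$dir")
      docker build --pull -t "registry.example.com/$name:latest" "$dir"
      docker push "registry.example.com/$name:latest"
    done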

Q4:

Does it change anything above if we have different AWS accounts: one for dev, one for QA, one for prod, etc.?

We chose option number two. I'll describe what we're using, and though we're not a big enterprise environment (a few hundred containers running at any given time), perhaps it will give you some ideas. We stick with a single S3 backend/bucket (on a single account) for all of our "types" of images, and we namespace the images appropriately. The trick here is that our "dev" environments are also "production" environments. That is central to our entire design and our raison d'être for using Docker: we have our development environment mirror the production environment as closely as possible.

We have several production machines on commodity dedicated hardware in a few geographically distinct datacenters, and we run our images on top of those. We use a standard git workflow for pushing code and changes. We use Bitbucket (and mirror to our own self-hosted GitLab) and run tests on each push to master with Shippable (big fan of Shippable here!). That is coordinated by a custom piece of software that listens for webhooks on a "head" server, then builds/tags/commits and pushes the Docker image to our private registry (at one point hosted on that same "head" server). The counterpart to that custom webhook software is a simple custom server on each production machine, which pulls the new image from the private registry and does a zero-downtime update, putting a new container in place of the old one. We use the incredible and awesome nginx-proxy reverse proxy by Jason Wilder on all of our Docker machines, which makes the process vastly easier than it would be without it.

If you have particular requirements for segregating dev/qa/prod images from one another, I'd suggest sharding the backend as little as possible, since every extra shard is another possible point of failure. The real strength of Docker is the uniformity of environments you can create. When we "flip the switch" on a container's task from development to QA to production, we change the port number the container listens on from our "dev/qa" port to our "production" port number, and we track these changes in our internal tracker. We use multiple records in DNS to "load balance", and haven't needed to scale up to actual load balancers (yet); if we do, we'll use the load-balancing feature of the nginx-proxy image we love so much, since that feature is built in already.
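
As a very rough sketch of the pull-and-replace step a production machine might run when fronted by nginx-proxy (the actual setup described above is custom software; the container names, image tag, and VIRTUAL_HOST value here are placeholders):

    # Triggered by the webhook from the "head" server.
    docker pull registry.example.com/myapp:1.4.2-prod

    # Start the new container first; nginx-proxy watches the Docker socket and
    # routes app.example.com to containers that declare it as VIRTUAL_HOST.
    docker run -d --name myapp-v1.4.2 \
      -e VIRTUAL_HOST=app.example.com \
      registry.example.com/myapp:1.4.2-prod

    # Once the new container is up and healthy, retire the old one.
    docker stop myapp-v1.4.1 && docker rm myapp-v1.4.1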

If we need to change our base image for some reason, we make the changes in that environment (updates, whatever) and work from it in our new Dockerfile. Those changes are baked into a new image, which gets pushed to our private registry and propagates out to production like any other "regular" code push. I should note that we mirror all of our data on S3 (not just our end-product Docker images), in case something tragic happens and we need to spin up a new "head" server (which uses Docker for most of its functions). FWIW, it was a heck of a price break for us to be able to move to commodity dedicated hardware instead of EC2. Good luck!
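
The S3 mirroring mentioned above could be as simple as a scheduled sync; a minimal sketch, assuming the AWS CLI is installed and configured, with placeholder paths and bucket names:

    # Cron job on the "head" server: mirror application data to S3 so a
    # replacement head server can be rebuilt from scratch if needed.
    aws s3 sync /srv/appdata s3://my-backup-bucket/head-server/appdata --delete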

