
I have a very similar question to this one.

I have an Angular application that collects data, which are then processed via a REST API. I can happily dockerize both applications and they run fine locally. However, when I deploy them to make them accessible from "everywhere", I can only reach the front end; the connection to the REST API is not functional.

Inside my Angular app, I have a file baseurl.ts that just contains:

export const baseURL = 'http://localhost:3000/';

I make the application production ready using:

ng build --prod

which creates the dist folder. I then build a container from the following Dockerfile (taken from here):

FROM node:alpine AS builder

# Proxy settings (build arguments with defaults)
ARG PROXY=http://myproxy
ARG NOPROXY=localhost,127.0.0.1

ENV http_proxy=${PROXY}
ENV https_proxy=${PROXY}
ENV NO_PROXY=${NOPROXY}
ENV no_proxy=${NOPROXY}

WORKDIR /app

COPY . .

# Install dependencies and build the Angular app into /app/dist
RUN npm install && \
    npm run build

# Serve the built app with nginx
FROM nginx:alpine

COPY --from=builder /app/dist/* /usr/share/nginx/html/

I build the container using

docker build -t form_angular:v1 .

and run it using

docker run -d -p 8088:80 form_angular:v1

The second Dockerfile, for the REST API, looks like this:

FROM continuumio/miniconda3

# Proxy settings (build arguments with defaults)
ARG PROXY=http://myproxy
ARG NOPROXY=localhost,127.0.0.1

ENV http_proxy=${PROXY}
ENV https_proxy=${PROXY}
ENV NO_PROXY=${NOPROXY}
ENV no_proxy=${NOPROXY}

COPY my_environment.yml my_environment.yml
SHELL ["/bin/bash", "-c"]

# Point apt at the proxy and install build tools
RUN echo "Using proxy $PROXY" \
    && touch /etc/apt/apt.conf \
    && echo "Acquire::http::Proxy \"$PROXY\";" >> /etc/apt/apt.conf \
    && cat /etc/apt/apt.conf \
    && apt-get -q -y update \
    && DEBIAN_FRONTEND=noninteractive apt-get -q -y upgrade \
    && apt-get -q -y install \
       build-essential \
    && apt-get -q clean \
    && rm -rf /var/lib/apt/lists/*

# Create the conda environment from the spec
RUN ["conda", "env", "create", "-f", "my_environment.yml"]
COPY user_feedback.py user_feedback.py
# Activate the environment and serve the API with gunicorn on port 3000
CMD source activate my_environment; gunicorn -b 0.0.0.0:3000 user_feedback:app

Building:

docker build -t form_rest:latest .

Running:

docker run --name form_rest -d -p 3000:3000 form_rest:latest

As I said, all of that works as expected when running on localhost. How do I now make these two containers talk to each other for "global" deployment?

Cleb
2 Answers


Your baseURL is hardcoded to localhost. For "global" deployment you would need to change the baseURL to point to the global endpoint of your REST API. That requires you to know the global endpoint, and it would need to be static.
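
For illustration, here is a minimal sketch of that first option in baseurl.ts; the hostname api.example.com is a placeholder, not something from the original post:

// baseurl.ts: point the app at a fixed, publicly reachable endpoint (placeholder hostname)
export const baseURL = 'http://api.example.com:3000/';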

Another option would be to set baseURL to /api for prod and configure the Angular app's nginx to proxy /api to your REST API. You would need to link the containers for that to work, but you wouldn't need to expose a public port on the REST API container; it would only be proxied through nginx.
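
Under that second option, baseurl.ts would roughly become the following (a sketch, assuming every API route is served under the /api/ prefix):

// baseurl.ts: relative URL, resolved against whatever host serves the Angular app
export const baseURL = '/api/';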

I use the nginx proxy option for my projects and use docker-compose to handle all the linking and inter-container communication.

Example docker-compose.yml and nginx.conf files are below. They are taken from what I'm currently using; I think it should work for you.

docker-compose.yml

version: '3.4'
services:
  nginx:
    container_name: nginx
    image: form_angular
    build:
      context: .
      dockerfile: <path to angular/nginx dockerfile>
    ports:
      - 8088:80
    networks:
      - my-network
  restapi:
    container_name: restapi
    image: form_rest
    build:
      context: .
      dockerfile: <path to rest dockerfile>
    networks:
      - my-network
networks:
  my-network:
    driver: bridge
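
With those two files in place, something along these lines should build the images and start both containers (a sketch; exact flags depend on your docker-compose version):

docker-compose up -d --build
docker-compose logs -f restapi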

nginx.conf:

events {
  worker_connections 1024;
}
http {
  upstream api {
    server restapi:3000;
  }
  server {
    server_name nginx;
    root /usr/share/nginx/html;
    index index.html;
    include /etc/nginx/mime.types;
    location /api/ {
      proxy_pass http://api;
      proxy_http_version 1.1;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection 'upgrade';
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-NginX-Proxy true;
      proxy_cache_bypass $http_upgrade;
    }
    location /assets/ {
      access_log off;
      expires 1d;
    }
    location ~ \.(css|js|svg|ico)$ {
      access_log off;
      expires 1d;
    }
    location / {
      try_files $uri /index.html;
    }
  }
}
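
As discussed in the comments below, this nginx.conf only takes effect if it is copied into the image; a minimal sketch of the extra line for the Angular/nginx Dockerfile, assuming nginx.conf sits next to that Dockerfile:

COPY nginx.conf /etc/nginx/nginx.conf
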
  • Hardcoding is something I would really like to avoid, as I would then have to create a new image every time I want to deploy it somewhere else. The second option sounds great! Could you show what such a docker-compose file would look like for the case above? – Cleb Feb 27 '19 at 15:20
  • Updated answer with examples – Finnur Eiríksson Feb 27 '19 at 15:38
  • Thanks a lot! I can test it only tomorrow; then I get back to you and upvote/accept. :) – Cleb Feb 27 '19 at 16:27
  • Few questions: if I use `dockerfile: folder/Dockerfile` will it then build the image on the fly with the name `form_angular` i.e. what is the `build` for as I already have an image? Secondly, where does this `nginx.conf` file go i.e. what do I have to do with it in this context? Thanks! – Cleb Feb 28 '19 at 08:13
  • If you don't have a pre-built image docker-compose will build it for you. You can also pass flags to docker-compose to have it manually build the image for you before starting. To answer your first question, yes, this should work. If you don't want docker-compose to build your image you can delete the build section. I edited service names in docker-compose.yml, server reference in line 6 in nginx.conf needs to match the service name you want to point to. You need to add a copy line to the nginx dockerfile that copies the config file to /etc/nginx/nginx.conf – Finnur Eiríksson Feb 28 '19 at 08:46
  • I like the idea to use nginx as proxy for the API – David Mar 01 '19 at 09:36
  • Still no luck, which is - I guess - due to internal proxy issues. I upvote for now as the answer is quite helpful and provides a lot of food for thought and then accept if I manage to make it work. Thanks a lot for your help so far! – Cleb Mar 01 '19 at 10:49
  • If you have more questions or need more help don't hesitate to ask. Glad I could help. – Finnur Eiríksson Mar 01 '19 at 16:04

When you use localhost in a container, it refers to the container itself, not the host running the container. So if you point to "localhost" from a second container (the UI in your case), that container will look at itself and won't find the API.

One of the options to solve your problem is to make your containers reachable by name.

The easiest way to do that, in your case, is using docker-compose, e.g.:

version: '3'
services:
  angular:
    image: "form_angular:v1"
    container_name: "form_angular"
    ports:
      - "8088:80"
    external_links:
      - restapi

  restapi:
    image: "form_rest:latest"
    container_name: "form_rest"
    ports:
      - "3000:3000"
    external_links:
      - angular

And with that, from the Angular container you can reach the REST API using the name restapi (as a DNS name), and the REST API can reach the Angular container using the name angular.

I suggest you read more about docker-compose at https://docs.docker.com/compose/

It is very versatile and easy, and you can live with it for a long time, until you decide to build your own cloud ;)

David
  • Thanks a lot! I can test it only tomorrow, unfortunately; will get back to you and upvote/accept. – Cleb Feb 27 '19 at 16:28
  • Since this is a web application, it's the browser viewing the webpage that makes the request to the REST API, not the container hosting the web app. In this case, localhost works because the browser and both containers are running on the same computer. Making the containers reachable by name wouldn't solve the problem of reaching the REST API from a browser on a different computer unless the container name is globally reachable. – Finnur Eiríksson Feb 27 '19 at 16:30
  • Here I get `services.restapi.ports contains an invalid type, it should be an array`. I also get `Unsupported config option for services.angular: 'name'`. Any ideas? Also, would I have to change any code in the angular application, in this case `export const baseURL = 'http://localhost:3000/';`, to something like `export const baseURL = 'restapi';`? – Cleb Feb 28 '19 at 07:50
  • For the ports error, instead of `-d "3000:3000"` remove the d so it becomes `- "3000:3000"`. `name` should be changed to `container_name`. baseURL would need to point to http://restapi:3000, but for any of this to work, you would need to browse the angular website from within the form_angular container – Finnur Eiríksson Feb 28 '19 at 08:50
  • @FinnurEiríksson When I try this and use `docker-compose up`, I get `Removing form_angular Recreating 2e8227f6fed2_form_angular ... done Recreating c84075454609_form_rest ... done Attaching to form_rest, form_angular form_rest | [2019-02-28 10:49:11 +0000] [9] [INFO] Starting gunicorn 19.9.0 form_rest | [2019-02-28 10:49:11 +0000] [9] [INFO] Listening at: http://0.0.0.0:3000 (9) form_rest | [2019-02-28 10:49:11 +0000] [9] [INFO] Using worker: sync form_rest | [2019-02-28 10:49:11 +0000] [12] [INFO] Booting worker with pid: 12` The frontend is available, rest api does not work. – Cleb Feb 28 '19 at 10:55
  • @FinnurEiríksson: Will now try your solution. – Cleb Feb 28 '19 at 10:56