Issue
I am trying to deploy a bash server on Cloud Run in order to easily trigger gcloud commands, with parameters passed in the body of a POST request to the service. The inspiration comes mainly from here.
At the moment the bash server looks like this:
#!/usr/bin/env bash
PORT=${PORT-8080}
echo "Listening on port $PORT..."
while true
do
  rm -f out
  mkfifo out
  trap "rm -f out" EXIT
  echo "Waiting.."
  cat out | nc -Nv -l 0.0.0.0 "${PORT}" > >( # parse the netcat output to build the answer, redirected to the pipe "out"
    # A POST request is expected, so the request is read until the '}' ending the JSON payload.
    read -d } PAYLOAD
    # The payload is extracted by stripping the headers of the request,
    # then every entry of the JSON is exported as KEY=VALUE.
    # Note: the trailing '}' re-appends the delimiter consumed by 'read -d }'.
    for s in $(echo $PAYLOAD} | \
               sed '/./{H;$!d} ; x ; s/^[^{]*//g' | \
               sed '/./{H;$!d} ; x ; s/[^}]*$//g' | \
               jq -r "to_entries|map(\"\(.key)=\(.value|tostring)\")|.[]" )
    do
      export $s
      echo $s
    done
    echo "Running the gcloud command..."
    # gsutil mb -l $LOCATION gs://$BUCKET_NAME
    printf "%s\n%s\n%s\n" "HTTP/1.1 202 Accepted" "Content-length: 0" "Connection: Keep-Alive" > out
  )
  continue
done
The Dockerfile for deployment looks like this:
FROM google/cloud-sdk:alpine
RUN apk add --upgrade jq netcat-openbsd coreutils \
&& apk add --no-cache --upgrade bash
COPY main.sh .
ENTRYPOINT ["./main.sh"]
(the Cloud SDK image, plus netcat-openbsd for the server part, jq for the JSON processing part, and bash on top)
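For local testing, the image can be built and run along these lines (the image name bash-server is arbitrary, not part of the original setup):
docker build -t bash-server .
docker run --rm -p 8080:8080 bash-server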
With this fairly simple setup I can deploy the service, and it listens for incoming requests. When it receives a POST request with a payload such as
{"LOCATION": "MY_VALID_LOCATION", "BUCKET_NAME": "MY_VALID_BUCKET_NAME"}
the (here commented out) Cloud SDK command runs correctly and creates a bucket with the specified name in the specified region, in the same project as the Cloud Run service.
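Such a request can be sent, for instance, with curl (the service URL is a placeholder, and authentication headers are omitted):
curl -X POST -H "Content-Type: application/json" \
  -d '{"LOCATION": "MY_VALID_LOCATION", "BUCKET_NAME": "MY_VALID_BUCKET_NAME"}' \
  https://MY-SERVICE-xxxxx.a.run.app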
However, I receive a 502 error at the end of the process, whereas I'm expecting a 202 Accepted response.
The server works correctly locally; however, it seems that Cloud Run never receives the HTTP response.
How can I make sure the HTTP response is correctly transmitted back?
Solution
Thanks to @guillaumeblaquiere's suggestion I managed to achieve what I was ultimately trying to do: run bash scripts on Cloud Run.
Two points were important: 1) being able to access the payload of the incoming HTTP request, and 2) having access to the Google Cloud SDK.
The trick was to build a Docker image based not only on the Google Cloud SDK base image, but also on the shell2http one, which handles the server aspects.
The Dockerfile therefore looks like this:
FROM google/cloud-sdk:alpine
RUN apk add --upgrade jq
COPY --from=msoap/shell2http /app/shell2http /shell2http
COPY main.sh .
ENTRYPOINT ["/shell2http","-cgi"]
CMD ["/","/main.sh"]
Thanks to the latter image, the HTTP processing is handled by a Go server, and the incoming HTTP request is piped into the stdin of the main.sh script.
The -cgi option also sets several environment variables, for instance $HTTP_CONTENT_LENGTH, which contains the length of the payload (and therefore of the JSON carrying the parameters we want to extract and pass to further actions).
As such, with the following as the first line of our main.sh script:
read -n$HTTP_CONTENT_LENGTH PAYLOAD
the PAYLOAD variable is set to the incoming JSON payload.
Using jq, it is then possible to do whatever we want with it.
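As an illustration, a minimal main.sh along these lines would reproduce the bucket-creation example from the question. This is a sketch under the same LOCATION/BUCKET_NAME assumptions as above, not the exact script from the repository:
#!/usr/bin/env bash
# shell2http -cgi pipes the request body to stdin and sets $HTTP_CONTENT_LENGTH.
read -n$HTTP_CONTENT_LENGTH PAYLOAD

# Export every top-level key of the JSON payload as an environment variable
# (assumes keys and values contain no whitespace, as in the question).
for s in $(echo "$PAYLOAD" | jq -r 'to_entries|map("\(.key)=\(.value|tostring)")|.[]')
do
  export $s
done

# Run the desired Cloud SDK command with the extracted parameters.
gsutil mb -l "$LOCATION" gs://"$BUCKET_NAME"

# Whatever is written to stdout becomes the HTTP response body.
echo "Bucket $BUCKET_NAME created in $LOCATION"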
All the code is gathered in this repository: https://github.com/cylldby/bash2run
In addition, this solution is used in this project as a very simple way to trigger a Google Workflow from an Eventarc trigger.
Big thanks again to @guillaume_blaquiere for the tip !
Answered By - Cylldby