Issue
The specification of the compute_instance I tried to connect to:
resource "google_compute_address" "ip_address" {
name = "ipv4-address"
}
resource "google_compute_instance" "controller" {
name = "controller"
machine_type = "n1-standard-2"
zone = "us-central1-a"
boot_disk {
initialize_params {
image = "ubuntu-os-cloud/ubuntu-2204-lts"
}
}
# Pour attribuer une adresse IP dans la plage CIDR : 100.100.20.0/24
network_interface {
subnetwork = google_compute_subnetwork.subnet.name
network_ip = "10.100.20.3"
access_config {
nat_ip = google_compute_address.ip_address.address
}
}
metadata_startup_script = "./'${file("${path.module}/Config_Ansible.sh")}'"
connection {
type = "ssh"
user = "MSI" # Remplacez par votre nom d'utilisateur SSH
private_key = file("./key") # Remplacez par le chemin de votre clé privée SSH
host = google_compute_address.ip_address.address
}
provisioner "file" {
source = "../Ansible_playbooks/Inventory_create.sh"
destination = "/home/ubuntu/Inventory_create.sh"
}
tags = ["ansible"]
}
The content of the file "./key" comes from the GCP SSH key service; I think this is the problem, because of the error:
Error: file provisioner error
│
│ with google_compute_instance.controller,
│ on Creation_VM_GCP.tf line 158, in resource "google_compute_instance" "controller":
│ 158: provisioner "file" {
│
│ Failed to read ssh private key: no key found
I tried to connect to this machine and send a file from local to this VM using the provisioner and connection blocks, but it failed.
Solution
Private and public SSH keys should be generated on the local machine (here's an example: https://git-scm.com/book/en/v2/Git-on-the-Server-Generating-Your-SSH-Public-Key). The private key stays on the local machine; make sure that no one else has access to it and that it is password-protected. The public key should be copied to the virtual machine so that you can access the VM using public-key authentication.
If you use something like this

  metadata = {
    ssh-keys = "your-ssh-username:${file("local_path/to/your/public-key-file")}"
    # alternatively:
    # ssh-keys = "your-ssh-username:content_of_public_key"
  }
and
  connection {
    type        = "ssh"
    user        = "your-ssh-username"
    private_key = file("local_path/to/your/private-key-file")
    host        = self.network_interface[0].access_config[0].nat_ip
  }
the public key will be copied to the VM into the file ~/.ssh/authorized_keys (GCP's guest environment reads the ssh-keys metadata when the instance starts). You can also copy the public key manually to ~/.ssh/authorized_keys on the VM; just make sure that each key in authorized_keys takes exactly one line!
Check the ssh-keys example in: https://cloud.google.com/compute/docs/metadata/predefined-metadata-keys
See also: https://cloud.google.com/compute/docs/instances/access-overview#ssh-access
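Putting it together with the resource from the question, a minimal sketch of how the pieces could fit (your-ssh-username and the ~/.ssh/id_ed25519 key paths are placeholders; adapt them to your setup):

  resource "google_compute_instance" "controller" {
    name         = "controller"
    machine_type = "n1-standard-2"
    zone         = "us-central1-a"

    boot_disk {
      initialize_params {
        image = "ubuntu-os-cloud/ubuntu-2204-lts"
      }
    }

    network_interface {
      subnetwork = google_compute_subnetwork.subnet.name
      network_ip = "10.100.20.3"
      access_config {
        nat_ip = google_compute_address.ip_address.address
      }
    }

    # The public key goes into instance metadata; GCP writes it to
    # ~/.ssh/authorized_keys for your-ssh-username on first boot.
    metadata = {
      ssh-keys = "your-ssh-username:${file(pathexpand("~/.ssh/id_ed25519.pub"))}"
    }

    # The private key stays on the local machine; Terraform only reads it
    # to open the SSH connection for the provisioner.
    connection {
      type        = "ssh"
      user        = "your-ssh-username"
      private_key = file(pathexpand("~/.ssh/id_ed25519"))
      host        = self.network_interface[0].access_config[0].nat_ip
    }

    provisioner "file" {
      source      = "../Ansible_playbooks/Inventory_create.sh"
      destination = "/home/your-ssh-username/Inventory_create.sh"
    }

    tags = ["ansible"]
  }

With this in place, the "Failed to read ssh private key: no key found" error should go away as long as file() points at a valid OpenSSH private key on the local machine.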
But also check the content of the local file referenced in "./'${file("${path.module}/Config_Ansible.sh")}'"; maybe this takes care of uploading the public key?
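As a side note, metadata_startup_script takes the script contents themselves, so a more conventional form of that line would be the following (a sketch, assuming Config_Ansible.sh is the startup script you want to run on boot):

  metadata_startup_script = file("${path.module}/Config_Ansible.sh")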
Edit: you can also have Terraform create a private/public key pair, see: How to create an SSH key in Terraform?
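For completeness, a minimal sketch of that approach with the hashicorp/tls provider (the resource name ssh and your-ssh-username are assumptions):

  resource "tls_private_key" "ssh" {
    algorithm = "RSA"
    rsa_bits  = 4096
  }

  # In google_compute_instance.controller, reference the generated pair:
  #   metadata = {
  #     ssh-keys = "your-ssh-username:${tls_private_key.ssh.public_key_openssh}"
  #   }
  #   connection {
  #     ...
  #     private_key = tls_private_key.ssh.private_key_pem
  #   }

Keep in mind that a key generated this way is stored in the Terraform state, so the state file must be protected accordingly.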
Answered By - user2314737