
Any example Vagrantfile to make 2 guests communicate? #54

Open
zakiharis opened this issue Mar 15, 2024 · 4 comments

Comments

@zakiharis

I saw that issue #40 is still open, but issue #30 is closed and says that I can use extra_qemu_args to achieve this.

But when I provision 2 guests, I can't make them ping each other.

Vagrantfile

Vagrant.configure('2') do |config|
  config.vm.box = "generic/ubuntu1804"

  (10..11).each_with_index do |address, index|
    vm_index = index + 1

    config.vm.define "nomad#{vm_index}" do |cp|
      cp.vm.hostname = "nomad#{vm_index}"
      cp.vm.provider "qemu" do |qe|
        qe.arch = "x86_64"
        qe.machine = "q35"
        qe.cpu = "max"
        qe.memory = "2G"
        qe.ssh_port = "50#{vm_index}22"
        qe.net_device = "virtio-net-pci"
        qe.extra_netdev_args = "net=10.100.100.0/24,dhcpstart=10.100.100.#{address}"
      end
    end
  end
end

nomad1 (ip addr output)

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:12:34:56 brd ff:ff:ff:ff:ff:ff
    inet 10.100.100.10/24 brd 10.100.100.255 scope global dynamic eth0
       valid_lft 85790sec preferred_lft 85790sec
    inet6 fec0::5054:ff:fe12:3456/64 scope site dynamic mngtmpaddr noprefixroute
       valid_lft 86139sec preferred_lft 14139sec
    inet6 fe80::5054:ff:fe12:3456/64 scope link
       valid_lft forever preferred_lft forever

nomad2 (ip addr output)

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:12:34:56 brd ff:ff:ff:ff:ff:ff
    inet 10.100.100.11/24 brd 10.100.100.255 scope global dynamic eth0
       valid_lft 85786sec preferred_lft 85786sec
    inet6 fec0::5054:ff:fe12:3456/64 scope site dynamic mngtmpaddr noprefixroute
       valid_lft 86363sec preferred_lft 14363sec
    inet6 fe80::5054:ff:fe12:3456/64 scope link
       valid_lft forever preferred_lft forever

Ping from nomad1 to nomad2:

PING 10.100.100.11 (10.100.100.11) 56(84) bytes of data.
From 10.100.100.10 icmp_seq=1 Destination Host Unreachable
From 10.100.100.10 icmp_seq=2 Destination Host Unreachable
From 10.100.100.10 icmp_seq=3 Destination Host Unreachable
From 10.100.100.10 icmp_seq=4 Destination Host Unreachable
From 10.100.100.10 icmp_seq=5 Destination Host Unreachable
From 10.100.100.10 icmp_seq=6 Destination Host Unreachable

^C--- 10.100.100.11 ping statistics ---
7 packets transmitted, 0 received, +6 errors, 100% packet loss, time 6149ms
pipe 4

I'm running on an M1 Mac. Kindly advise.

@wdcapl
Contributor

wdcapl commented Mar 16, 2024

Have you tried communicating over a host port forward? Just forward the ports and use the gateway IP address for both machines, on both sides.
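
For illustration, a minimal sketch of that approach, under a few assumptions: that the vagrant-qemu provider honors config.vm.network "forwarded_port" (as a later comment in this thread reports), that each guest can reach the host at the .2 address of its user network (10.100.100.2 here), and that 4646 is just a hypothetical service port. Provider options such as arch, machine, cpu, memory and ssh_port from the original Vagrantfile are omitted for brevity but would still be needed.

Vagrant.configure('2') do |config|
  config.vm.box = "generic/ubuntu1804"

  # nomad1 exposes a (hypothetical) service port 4646 to the host.
  config.vm.define "nomad1" do |cp|
    cp.vm.hostname = "nomad1"
    cp.vm.network "forwarded_port", guest: 4646, host: 4646
    cp.vm.provider "qemu" do |qe|
      qe.extra_netdev_args = "net=10.100.100.0/24,dhcpstart=10.100.100.10"
    end
  end

  # nomad2 reaches nomad1 indirectly: from inside nomad2, connect to the
  # gateway address 10.100.100.2 on port 4646, which the host forwards
  # back into nomad1 (assumption based on how the user-mode network is
  # described later in this thread).
  config.vm.define "nomad2" do |cp|
    cp.vm.hostname = "nomad2"
    cp.vm.provider "qemu" do |qe|
      qe.extra_netdev_args = "net=10.100.100.0/24,dhcpstart=10.100.100.11"
    end
  end
end

Note that each machine still needs a distinct qe.ssh_port, as in the Vagrantfile above, and a given host port can only be forwarded to one guest at a time.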

@MilanFun

MilanFun commented Apr 8, 2024

Have you tried communicating over a host port forward? Just forward the ports and use the gateway IP address for both machines, on both sides.

Could you provide an example config? (The default usage of Vagrant's port forwarding is ignored by the qemu provider.)

@bombardun

I have the same question about a multi-machine configuration.

@gbhat618

gbhat618 commented Dec 3, 2024

What I have observed is that when your network is 10.100.100.0/24, the DHCP address 10.100.100.2 points to the host machine.

I didn't check with the above 10.100.100.0/24 network, but here is what I use successfully:

  • vm1 (this is an NFS server): net=192.168.51.0/24,dhcpstart=192.168.51.10
     config.vm.network "forwarded_port", guest: 2049, host: 2049
     config.vm.network "forwarded_port", guest: 111, host: 111
     config.vm.network "forwarded_port", guest: 9024, host: 9024

    (I applied some NFS configuration to make sure the NFS ports are static on my NFS server.)
  • vm2: net=192.168.51.0/24,dhcpstart=192.168.51.11
    Then from vm2, I can mount the NFS export over IP 192.168.51.2:
    sudo mount -t nfs 192.168.51.2:/my_nfs_export /tmp/test -vvv
    

(I picked the network 192.168.51.0/24 from #36 (comment))
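
For reference, a sketch of the two-machine setup described above as a single Vagrantfile, assuming the same generic/ubuntu1804 box and the extra_netdev_args / forwarded_port options used earlier in this thread; the export path /my_nfs_export and mount point /tmp/test come from the comment above, and the NFS server/client installation and exports configuration are not shown.

Vagrant.configure('2') do |config|
  config.vm.box = "generic/ubuntu1804"

  # vm1: NFS server; its NFS ports are forwarded to the host so that
  # vm2 can reach them through the gateway address 192.168.51.2.
  config.vm.define "vm1" do |cp|
    cp.vm.hostname = "vm1"
    cp.vm.network "forwarded_port", guest: 2049, host: 2049
    cp.vm.network "forwarded_port", guest: 111, host: 111
    cp.vm.network "forwarded_port", guest: 9024, host: 9024
    cp.vm.provider "qemu" do |qe|
      qe.extra_netdev_args = "net=192.168.51.0/24,dhcpstart=192.168.51.10"
    end
  end

  # vm2: NFS client; inside the guest, mount the export via the gateway:
  #   sudo mount -t nfs 192.168.51.2:/my_nfs_export /tmp/test
  config.vm.define "vm2" do |cp|
    cp.vm.hostname = "vm2"
    cp.vm.provider "qemu" do |qe|
      qe.extra_netdev_args = "net=192.168.51.0/24,dhcpstart=192.168.51.11"
    end
  end
end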
