
Starting with RHEL 8, networking is configured through NetworkManager by default.

Network settings are normally managed with nmcli or nmtui, but with the nmstate package you can also use Ansible to configure a RHEL system's network.

1. Install required packages

[root@hk-tb-kvmhost ~]#yum install -y nmstate ansible rhel-system-roles

2. Write the YAML file (bonding + IP configuration)

[root@hk-tb-kvmhost ~]#vi create-bond.yml
---
- name: Configure a network bond that uses two Ethernet ports
  hosts: localhost
  become: true
  tasks:
  - include_role:
      name: rhel-system-roles.network

    vars:
      network_connections:
        # Define the bond profile
        - name: bond0
          type: bond
          interface_name: bond0
          ip:
            address:
              - "100.100.100.100/24"
            gateway4: 100.100.100.1
            dns:
              - 8.8.8.8
            dns_search:
              - example.com
          bond:
            mode: active-backup
          state: up

        # Add an Ethernet profile to the bond
        - name: eno1
          interface_name: eno1
          type: ethernet
          controller: bond0
          state: up

        # Add a second Ethernet profile to the bond
        - name: eno2
          interface_name: eno2
          type: ethernet
          controller: bond0
          state: up
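
The play above targets `hosts: localhost`. To apply the same playbook to remote machines, point `hosts:` at an inventory group instead; a minimal sketch (the group name and hostnames below are hypothetical placeholders):

```yaml
# inventory.yml -- hypothetical example; substitute your own hosts
all:
  children:
    kvmhosts:
      hosts:
        kvmhost01.example.com:
        kvmhost02.example.com:
```

Change the play header to `hosts: kvmhosts` and run `ansible-playbook -i inventory.yml -u root ./create-bond.yml`. Supplying an inventory also avoids the "provided hosts list is empty" warning shown in step 3.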

3. Run the playbook

[root@hk-tb-kvmhost ~]#ansible-playbook -u root ./create-bond.yml
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost
does not match 'all'

PLAY [Configure a network bond that uses two Ethernet ports] *********************************************

TASK [Gathering Facts] ***********************************************************************************
ok: [localhost]

TASK [include_role : rhel-system-roles.network] **********************************************************

TASK [rhel-system-roles.network : Check which services are running] **************************************
ok: [localhost]

TASK [rhel-system-roles.network : Check which packages are installed] ************************************
ok: [localhost]

TASK [rhel-system-roles.network : Print network provider] ************************************************
ok: [localhost] => {
    "msg": "Using network provider: nm"
}

TASK [rhel-system-roles.network : Install packages] ******************************************************
skipping: [localhost]

TASK [rhel-system-roles.network : Restart NetworkManager due to wireless or team interfaces] *************
skipping: [localhost]

TASK [rhel-system-roles.network : Enable and start NetworkManager] ***************************************
ok: [localhost]


< output truncated >


4. Verify the configuration

[root@hk-tb-kvmhost ~]#ip link show bond0
62: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether e4:43:4b:b5:2e:10 brd ff:ff:ff:ff:ff:ff
[root@hk-tb-kvmhost ~]#cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v4.18.0-305.19.1.el8_4.x86_64

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: None
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Peer Notification Delay (ms): 0

Slave Interface: eno1
MII Status: down
Speed: Unknown
Duplex: Unknown
Link Failure Count: 0
Permanent HW addr: e4:43:4b:b5:2e:10
Slave queue ID: 0

Slave Interface: eno2
MII Status: down
Speed: Unknown
Duplex: Unknown
Link Failure Count: 0
Permanent HW addr: e4:43:4b:b5:2e:12
Slave queue ID: 0


[root@hk-tb-kvmhost ~]#ifconfig bond0
bond0: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST>  mtu 1500
        inet 100.100.100.100  netmask 255.255.255.0  broadcast 100.100.100.255
        inet6 fe80::3cf:8961:1fcf:ffcf  prefixlen 64  scopeid 0x20<link>
        ether e4:43:4b:b5:2e:10  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 16  bytes 1104 (1.0 KiB)
        TX errors 0  dropped 2 overruns 0  carrier 0  collisions 0
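
The `/proc/net/bonding/bond0` output above shows the driver defaults for this mode (MII polling at 100 ms, no primary slave). If you need different bonding options, they can be added under the profile's `bond:` key in the same playbook; a sketch (option names beyond `mode` are assumptions — check which bond settings your rhel-system-roles version supports):

```yaml
        - name: bond0
          type: bond
          interface_name: bond0
          bond:
            mode: active-backup
            miimon: 200    # assumed option: MII link-monitoring interval in ms
            primary: eno1  # assumed option: preferred active interface
          state: up
```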