
Commit 75fe6a6

Updates for docs sync, add OVA
Signed-off-by: Pedro Ielpi <[email protected]>
1 parent a8f1508 commit 75fe6a6

7 files changed

Lines changed: 191 additions & 6 deletions


Lines changed: 4 additions & 0 deletions
@@ -0,0 +1,4 @@
---
title: "Ova Management"
weight: "5"
---
Lines changed: 151 additions & 0 deletions
@@ -0,0 +1,151 @@
---
title: "OVA Import"
description:
categories:
pageintoc: ""
tags:
weight: "2"
---

<a id="import-ova"></a>

<!--# OVA Import -->

## Requirements

The [OneSwap](https://github.com/OpenNebula/one-swap) VM import tool assumes that the provided OVA has been exported from a VMware environment; the user must make sure that the provided OVA is compatible with VMware environments. Other sources (e.g. Xen or VirtualBox) are currently not supported.
When converting an OVA, you will need enough space both in the `/tmp` folder and in the destination Datastore where the disk images are going to be imported.
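The space requirement can be checked up front. A minimal pre-flight sketch (the 20 GiB threshold is an arbitrary example value, not an official figure):

```shell
# Pre-flight check before running oneswap import: virt-v2v stages the
# converted disks under /tmp (or the directory given with --work-dir),
# so verify free space there first.
# The 20 GiB threshold below is an arbitrary example, not a requirement.
required_kb=$((20 * 1024 * 1024))
avail_kb=$(df -Pk /tmp | awk 'NR==2 {print $4}')
if [ "$avail_kb" -lt "$required_kb" ]; then
  echo "WARNING: only ${avail_kb} KiB free in /tmp" >&2
fi
```

The destination Datastore's capacity can be checked the same way with `onedatastore show` before starting the import.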
### Windows VirtIO drivers

Before converting Windows VMs, download the required VirtIO drivers for the Windows VM distribution. These drivers can be downloaded from the [virtio-win repository](https://github.com/virtio-win/virtio-win-pkg-scripts/blob/master/README.md).

{{< alert title="Note" color="success" >}}
The converted VM will reboot several times after instantiation in order to install and configure the VirtIO drivers.{{< /alert >}}
## Usage

It is possible to specify the target Datastore and VNET for the OVA to be imported. Refer to `man oneswap` for the complete documentation of the `oneswap` command. Available options for the `oneswap import` command are:

| Parameter | Description |
|-----------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `--ova file.ova \| /path/to/ovf/files/` | Path to the OVA file or folder containing the OVF files. |
| `--datastore name \| ID` | Name/ID of the Datastore to store the new Image. Accepts one or more Datastores (e.g. `--datastore 101,102`). When more than one Datastore is provided, each disk will be allocated in a different one. |
| `--network name \| ID` | Name/ID of the VNET to assign in the VM Template. Accepts one or more VNETs (e.g. `--network 0,1`). When more than one VNET is provided, each interface from the OVA will be assigned to the corresponding VNET. |
| `--virtio /path/to/virtio.iso` | Path to the ISO file with the VirtIO drivers for the Windows version. |

If multiple network interfaces are detected when importing an OVA and fewer VNET IDs than interfaces are provided with `--network ID`, the last provided VNET ID will be reused for the remaining interfaces. The same applies to Datastores with the `--datastore ID` option.
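This fallback rule can be illustrated with a small shell sketch (`assign_ids` is a hypothetical helper for illustration only, not part of oneswap): each resource takes the matching ID from the list, and once the list runs out, the last ID is reused.

```shell
# Hypothetical illustration of the fallback rule: with more interfaces
# (or disks) than provided IDs, the last ID is reused for the rest.
assign_ids() {
  list=$(printf '%s' "$1" | tr ',' ' ')   # "1,0" -> "1 0"
  count=$2
  i=0
  last=""
  for id in $list; do
    [ "$i" -ge "$count" ] && break
    printf '%s\n' "$id"                   # one ID per resource
    last=$id
    i=$((i + 1))
  done
  while [ "$i" -lt "$count" ]; do         # pad with the last ID
    printf '%s\n' "$last"
    i=$((i + 1))
  done
}

# Three NICs, but only two VNET IDs given (as with --network 1,0):
assign_ids "1,0" 3    # prints 1, 0, 0 -> the third NIC also uses VNET 0
```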
### Example: importing an OVF

Example command showing how to import an OVF, using Datastore ID 101 and VNET ID 1:

```default
$ oneswap import --ova /ovas/vm-alma9/ --datastore 101 --network 1
Running: virt-v2v -v --machine-readable -i ova /ovas/vm-alma9/ -o local -os /tmp/vm-alma9/conversions/ -of qcow2 --root=first

Setting up the source: -i ova /home/onepoc/ovas/vm-alma9/

(...)

$ onetemplate list
  ID USER     GROUP    NAME       REGTIME
  63 onepoc   oneadmin vm-alma9   03/24 16:34:34

$ onetemplate instantiate 63
VM ID: 103
```
### Example: importing an OVA with multiple Datastores and VNETs

The source OVA has two disks and two NICs, as can be seen in the `.ovf` file:

```default
<DiskSection>
  <Info>List of the virtual disks</Info>
  <Disk ovf:capacityAllocationUnits="byte" ovf:format="http://www.vmware.com/interfaces/specifications/vmdk.html#streamOptimized" ovf:diskId="vmdisk1" ovf:capacity="8589934592" ovf:fileRef="file1"/>
  <Disk ovf:capacityAllocationUnits="byte" ovf:format="http://www.vmware.com/interfaces/specifications/vmdk.html#streamOptimized" ovf:diskId="vmdisk2" ovf:capacity="2147483648" ovf:fileRef="file2"/>
</DiskSection>
<NetworkSection>
  <Info>The list of logical networks</Info>
  <Network ovf:name="VM Network 0">
    <Description>The VM Network 0 network</Description>
  </Network>
  <Network ovf:name="VM Network 1">
    <Description>The VM Network 1 network</Description>
  </Network>
</NetworkSection>
```
Example command showing how to import an OVA with two disks and two network interfaces, importing each disk to a different Datastore and assigning each NIC to a different VNET:

```default
$ oneswap import --ova /home/onepoc/ovas/ubuntu2404.ova --datastore 1,101 --network 1,0
Running: virt-v2v -v --machine-readable -i ova /home/onepoc/ovas/ubuntu2404.ova -o local -os /tmp/ubuntu2404/conversions/ -of qcow2 --root=first

Setting up the source: -i ova /home/onepoc/ovas/ubuntu2404.ova

(...)

$ onetemplate list
  ID USER     GROUP    NAME         REGTIME
 101 onepoc   oneadmin ubuntu2404   04/10 12:55:03
```

The OS Image is imported into Datastore 1 and the Datablock Image into Datastore 101, and the VM Template has one NIC using VNET 1 and a second NIC using VNET 0.

```default
$ oneimage list
  ID USER     GROUP    NAME           DATASTORE  SIZE TYPE PER STAT RVMS
 151 onepoc   oneadmin ubuntu2404_1   NFS image    2G DB   No  rdy     0
 150 onepoc   oneadmin ubuntu2404_0   default      8G OS   No  rdy     0

$ onetemplate show 101 | grep NIC -A 1
NIC=[
  NETWORK_ID="1" ]
NIC=[
  NETWORK_ID="0" ]
```
## Context injection

OneSwap will detect the guest operating system and try to inject the context packages available from the [one-apps](https://github.com/opennebula/one-apps) repository.

Context injection is performed following these steps:

1. The context package is installed using the distribution's package manager. This step may fail, triggering the fallback context installation method:

```default
Inspecting disk...Done (3.92s)
Injecting one-context...Running: virt-customize -q -a /tmp/vm-alma9/conversions/vm-alma9-sda --run-command 'subscription-manager repos --enable codeready-builder-for-rhel-9-$(arch)-rpms' --run-command 'yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm' --copy-in /var/lib/one/context//one-context-6.10.0-3.el9.noarch.rpm:/tmp --install /tmp/one-context-6.10.0-3.el9.noarch.rpm --delete /tmp/one-context-6.10.0-3.el9.noarch.rpm --run-command 'systemctl enable NetworkManager.service || exit 0'
Failed (6.31s)
```

2. If the previous step fails, the context packages are copied into the guest OS and installed on the first boot. Sometimes the VM needs to boot twice for this method to work.

```default
Running: virt-customize -q -a /tmp/vm-alma9/conversions/vm-alma9-sda --firstboot-install epel-release --copy-in /var/lib/one/context//one-context-6.10.0-3.el9.noarch.rpm:/tmp --firstboot-install /tmp/one-context-6.10.0-3.el9.noarch.rpm --run-command 'systemctl enable network.service || exit 0'
Success (42.24s)
Context will install on first boot, you may need to boot it twice.
```

{{< alert title="Note" color="success" >}}
If context injection does not work after importing, it is also possible to install one-context **before exporting the OVA** from VMware, using the packages available in the one-apps repository, and to uninstall VMware Tools. In this case, be aware that the one-context service will remove any manual network configuration done in the guest OS, and the VM will no longer be able to get its network configuration from VMware.{{< /alert >}}
## Additional virt-v2v options

The following parameters can be tuned for virt-v2v; defaults are applied if no options are provided.

| Parameter | Description |
|----------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `--v2v-path /path/to/virt-v2v` | Path to the `virt-v2v` executable. Default: `virt-v2v`. |
| `--work-dir \| -w /path/to/work/dir` | Directory where disk conversion takes place; a subdirectory is created for each VM. Default: `/tmp`. |
| `--format \| -f name` | Disk format, `qcow2` or `raw`. Default: `qcow2`. |
| `--virtio /path/to/iso` | Full path of the win-virtio ISO file. Required to inject VirtIO drivers into Windows guests. |
| `--win-qemu-ga /path/to/iso` | Install the QEMU Guest Agent in a Windows guest. |
| `--qemu-ga` | Install the qemu-guest-agent package in a Linux guest; useful with `--custom` or `--fallback`. |
| `--delete-after` | Removes the leftover conversion directory in the working directory, which contains the converted VM disks and descriptor files. |
| `--vddk /path/to/vddk/` | Full path to the VDDK library; required for VDDK-based transfer. |
| `--virt-tools /path/to/virt-tools` | Path to the directory containing `rhsrvany.exe`. Default: `/usr/local/share/virt-tools`. See [https://github.com/rwmjones/rhsrvany](https://github.com/rwmjones/rhsrvany). |
| `--root option` | Chooses the root filesystem to be converted. Can be `ask`, `single`, `first`, or `/dev/sdX`. |
Lines changed: 17 additions & 0 deletions
@@ -0,0 +1,17 @@
---
title: "Overview"

description:
categories:
pageintoc: ""
tags:
weight: "1"
---

<a id="ova-management-overview"></a>

<!--# Overview -->

OpenNebula supports importing OVAs that have been exported from vCenter/ESXi environments, generating the necessary VM Template and Images.

It is possible to import `.ova` files or a folder containing the OVF files (VMDK disk files and a manifest file in `.ovf` format). The import tool will inject the context packages into the target Images, automatically detecting the guest operating system.

content/7.0/product/cloud_clusters_infrastructure_configuration/storage_system_configuration/netapp.md renamed to content/7.0/product/cloud_clusters_infrastructure_configuration/storage_system_configuration/netapp_ds.md

File renamed without changes.

content/7.0/quick_start/try_opennebula/opennebula_evaluation_environment/provisioning_edge_cluster.md

Lines changed: 4 additions & 4 deletions
@@ -61,7 +61,7 @@ Creating an AWS account is covered in the previous tutorial in this Quick Start
 
 As a first step, if you don’t already have one, create an account in AWS. AWS publishes a complete guide: [How do I create and activate a new AWS account?](https://aws.amazon.com/premiumsupport/knowledge-center/create-and-activate-aws-account/)
 
-After you have created your account, you’ll need to obtain the `access_key` and `secret_key` of a user with the necessary permissions to manage instances. The relevant AWS guide is [Configure tool authentication with AWS](https://docs.aws.amazon.com/powershell/latest/userguide/pstools-appendix-sign-up.html).
+After you have created your account, you’ll need to obtain the `access_key` and `secret_key` of a user with the necessary permissions to manage instances. The relevant AWS guide is [Configure tool authentication with AWS](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-keys-admin-managed.html).
 
 Next, you need to choose the region where you want to deploy the new resources. You can check the available regions in AWS’s documentation: [Regions, Availability Zones, and Local Zones](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.RegionsAndAvailabilityZones.html).
 
@@ -80,7 +80,7 @@ To log in, point your browser to the OneProvision address:
 https://<FRONT-END IP>:2616/fireedge/provision
 ```
 
-In the log in screen, enter the credentials for user `oneadmin`.
+In the login screen, enter the credentials for user `oneadmin`.
 
 Sunstone will display the **OneProvision** screen:
 
@@ -104,7 +104,7 @@ Sunstone displays the **Provider template** screen, showing the **Provision type
 
 ![image_provider_create_step1](/images/fireedge_cpi_provider_create1.png)
 
-Click **Next**. In the next screen you can enter a description for your provider:
+Click **Next**. In the next screen, you can enter a description for your provider:
 
 ![image_provider_create_step2](/images/fireedge_cpi_provider_create2.png)
 
@@ -181,7 +181,7 @@ To see a running log of the provision, click **Log**:
 
 Provisioning will take a few minutes. When it’s finished, the log will display the message `Provision successfully created`, followed by the provision’s ID.
 
-At this point the Edge Cluster has been created, and is up and running. In the next step, we’ll verify that all of the specified resources for the provision (the host, datastore, network, and the cluster itself) have been correctly created and registered with OpenNebula.
+At this point, the Edge Cluster has been created and is up and running. In the next step, we’ll verify that all of the specified resources for the provision (the host, datastore, network, and the cluster itself) have been correctly created and registered with OpenNebula.
 
 ## Step 4: Validate the New Infrastructure

content/7.0/quick_start/try_opennebula/opennebula_evaluation_environment/running_kubernetes_clusters.md

Lines changed: 14 additions & 1 deletion
@@ -199,7 +199,7 @@ To verify the deployment using the command line, log in to the Front-end node as
 ```default
 [oneadmin@FN]$ oneflow list
   ID USER     GROUP    NAME               STARTTIME STAT
-   3 oneadmin oneadmin Service OneKE 1.29 04/29 08:18:17 RUNNING
+   3 oneadmin oneadmin Service OneKE 1.29 04/29 08:18:17 RUNNING
 ```
 
 To verify that the VMs for the cluster were correctly deployed, you can use the `onevm list` command. In the example below, the command lists the VMs for the cluster (and, in this case, the WordPress VM deployed in the previous tutorial):
 
@@ -495,3 +495,16 @@ In this case you can manually instruct the VMs to report `READY` to the OneGate
   ID USER     GROUP    NAME       STARTTIME STAT
    3 oneadmin oneadmin OneKE 1.29 08/30 12:35:21 RUNNING
 ```
+
+#### One or more VMs are Ready but unreachable
+
+This is a similar situation to the one above: `onevm list` shows all VMs running, but the service is still in the `DEPLOYING` state and the VM is not reachable through SSH (e.g. to run the `onegate vm update` command).
+
+In this case, you can try to scale the role of the problematic VM down and back up from [Sunstone]({{% relref "fireedge_sunstone.md" %}}), the Front-end UI:
+
+> 1. In Sunstone, go to **Services**, then select the **OneKE** Service.
+> 2. In the **Roles** tab, choose the problematic VM’s role (e.g. `worker`).
+> 3. Scale the role to `0`.
+> 4. Wait until the VM shuts down and the scaling and cooldown period of the Service finishes.
+> 5. Scale the role to `1`.
+> 6. Verify that the problem is solved and `oneflow list` reports the `RUNNING` state.

content/7.0/quick_start/understand_opennebula/opennebula_concepts/opennebula_overview.md

Lines changed: 1 addition & 1 deletion
@@ -105,7 +105,7 @@ The main components of an OpenNebula installation are listed below.
 
 * **OpenNebula Daemon** (`oned`): The OpenNebula Daemon is the core service of the cloud management platform. It manages the cluster nodes, virtual networks and storages, groups, users and their virtual machines; and provides the XML-RPC API to other services and end-users.
 * **Database**: OpenNebula persists the state of the cloud to a user-selected SQL database. This key component should be monitored and tuned for best performance, following best practices for the particular database product.
-* **Scheduler**: The OpenNebula Scheduler is responsible for planning deployment of pending Virtual Machines on available hypervisor nodes. It’s a dedicated daemon (`mm_sched`) installed alongside the OpenNebula Daemon, but can be deployed independently on a different machine.
+* **Scheduler**: The OpenNebula Scheduler framework is a modular system for optimal resource allocation. It is started automatically with the OpenNebula Daemon, and can apply different scheduling algorithms to allocate hosts, storage and virtual networks.
 * **Edge Cluster Provision**: This component creates fully functional OpenNebula Clusters on public cloud or edge providers. The Provision module integrates Edge Clusters into your OpenNebula cloud by utilizing these three core technologies: Terraform, Ansible and the OpenNebula Services.
 * **Monitoring Subsystem**: The monitoring subsystem is implemented as a dedicated daemon (`onemonitord`) launched by the OpenNebula Daemon. It gathers information relevant to the Hosts and the Virtual Machines, such as Host status, basic performance indicators, Virtual Machine status and capacity consumption.
 * **OneFlow**: The OneFlow service orchestrates multi-VM services as single entities, defining dependencies and auto-scaling policies for the application components. It interacts with the OpenNebula Daemon to manage the Virtual Machines (starts, stops), and can be controlled via the Sunstone GUI or over the CLI. It’s a dedicated daemon installed by default as part of the Single Front-end Installation, but can be deployed independently on a different machine.
