Deploying OpenStack Train

Environment

OS: CentOS 7.6
The controller and compute nodes each need two NICs: one for the management network and one for the provider network. The self-service networking option is used, and the nodes must be able to reach the Internet to install packages online. The block storage node needs only one NIC and also needs Internet access.
Controller node: eth0 10.1.102.191, eth1 10.1.102.195, 8C16G, one 100 GB system disk, one 50 GB data disk.
Compute node: eth0 10.1.102.192, eth1 10.1.102.196, 8C16G, one 100 GB system disk, one 50 GB data disk.
Block storage node: eth0 10.1.102.197, 4C4G, one 100 GB system disk, two 50 GB data disks.

Architecture

  1. Relationships between OpenStack services

     [Figure: OpenStack service relationships]

  2. Conceptual architecture

     [Figure: conceptual architecture]

  3. Hardware requirements

     [Figure: hardware requirements]

  4. Network layout

     [Figure: network layout]

     [Figure: network layout]

Environment preparation

1. Configure NICs, name resolution, firewall, SELinux, and yum repositories

Assuming you can handle NIC configuration, it is omitted here. Disable the firewall and SELinux, configure /etc/hosts, and configure the yum repositories (the Aliyun mirrors are used here) while disabling the EPEL repository. Apply the same configuration on all three nodes.
cat /etc/hosts
10.1.102.195 controller
10.1.102.196 compute
10.1.102.197 cinder
After configuration, verify that each node can reach the Internet and that all nodes can reach each other.
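A quick sanity check, using the host names defined above (a minimal sketch; adjust the external host as needed):
ping -c 4 www.aliyun.com   # Internet reachability
ping -c 4 controller
ping -c 4 compute
ping -c 4 cinder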

2. Configure time synchronization

Controller node:

yum install chrony

Edit /etc/chrony.conf; the Aliyun NTP servers are used here:
server ntp1.aliyun.com iburst
server ntp2.aliyun.com iburst
server ntp3.aliyun.com iburst
server ntp4.aliyun.com iburst
allow 10.1.100.0/22

systemctl enable chronyd.service
systemctl start chronyd.service

Other nodes:

yum install chrony

Edit /etc/chrony.conf and comment out the default NTP servers, keeping only:
server controller iburst

systemctl enable chronyd.service
systemctl start chronyd.service

Verification:
Run chronyc sources on each node.

[root@controller ~]# chronyc sources
210 Number of sources = 2
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^* 120.25.115.20                 2  10   377   708  +4276us[+4323us] +/-   17ms
^+ 203.107.6.88                  2  10   377   749  -5086us[-5038us] +/-   31ms
[root@compute ~]# chronyc sources
210 Number of sources = 4
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^+ ntp6.flashdance.cx            2  10   375   722   +203us[ +203us] +/-  236ms
^+ stratum2-1.ntp.led01.ru.>     2  10   377   646    -18ms[  -18ms] +/-  137ms
^* tick.ntp.infomaniak.ch        1  10   373   872   +559us[ +642us] +/-   96ms
^+ de-user.deepinid.deepin.>     3  10   377   311  +2568us[+2568us] +/-  106ms

3. Install the OpenStack packages

On CentOS, the extras repository provides the RPM that enables the OpenStack repository.

yum install centos-release-openstack-train  //Train release repository
yum upgrade //upgrade the packages on all nodes
Note: during the upgrade the setup-2.8.71-11.el7.noarch package failed to update; download the rpm manually, install it, then run the upgrade again.
yum install python-openstackclient //install the OpenStack client
yum install openstack-selinux  //install the OpenStack SELinux policies (SELinux is disabled here)

4. Install the database

On the controller node

yum install mariadb mariadb-server python2-PyMySQL

Create and edit /etc/my.cnf.d/openstack.cnf:
[mysqld]
bind-address = 10.1.102.195
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8

systemctl enable mariadb.service
systemctl start mariadb.service
mysql_secure_installation //secure the database service and set the root password

5. Install the message queue

On the controller node

yum install rabbitmq-server
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service
rabbitmqctl add_user openstack 123qweQWE,./  //add the openstack user
rabbitmqctl set_permissions openstack ".*" ".*" ".*" //grant configure, write, and read permissions
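Optionally, the new account and its permissions can be checked; a minimal sketch:
rabbitmqctl list_users                 # the openstack user should be listed
rabbitmqctl list_permissions -p /      # openstack should have ".*" ".*" ".*" on the / vhost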

6. Install memcached

On the controller node

yum install memcached python-memcached

Edit /etc/sysconfig/memcached:
OPTIONS="-l 0.0.0.0,::1"

systemctl enable memcached.service
systemctl start memcached.service
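To confirm memcached is answering on port 11211, a quick check (using the memcached-tool script shipped with the package):
memcached-tool controller:11211 stats | head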

7. Install the etcd key-value store

On the controller node

yum install etcd

Edit /etc/etcd/etcd.conf, using the controller's management IP:
#[Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://10.1.102.195:2380"
ETCD_LISTEN_CLIENT_URLS="http://10.1.102.195:2379"
ETCD_NAME="controller"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.1.102.195:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://10.1.102.195:2379"
ETCD_INITIAL_CLUSTER="controller=http://10.1.102.195:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER_STATE="new"

systemctl enable etcd
systemctl start etcd
Note: etcd failed to start on the first attempt; with the same configuration, removing and reinstalling the package let it start successfully.
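A quick health check against the client URL configured above (using the v2 etcdctl shipped with the CentOS 7 etcd package); a minimal sketch:
etcdctl --endpoints=http://10.1.102.195:2379 cluster-health
etcdctl --endpoints=http://10.1.102.195:2379 member list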

Install the Keystone service

On the controller node

mysql -u root -p123qweQWE,./
MariaDB [(none)]> CREATE DATABASE keystone;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY '123qweQWE,./';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY '123qweQWE,./';
yum install openstack-keystone httpd mod_wsgi

Edit /etc/keystone/keystone.conf:

[database]
# ...
connection = mysql+pymysql://keystone:123qweQWE,./@controller/keystone
[token]
# ...
provider = fernet
su -s /bin/sh -c "keystone-manage db_sync" keystone //populate the Identity service database
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
keystone-manage bootstrap --bootstrap-password 123qweQWE,./ \
  --bootstrap-admin-url http://controller:5000/v3/ \
  --bootstrap-internal-url http://controller:5000/v3/ \
  --bootstrap-public-url http://controller:5000/v3/ \
  --bootstrap-region-id RegionOne

Edit /etc/httpd/conf/httpd.conf:
ServerName controller

ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
systemctl enable httpd.service
systemctl start httpd.service

Configure the environment variables, then create the initial domain, projects, users, and roles

$ export OS_USERNAME=admin
$ export OS_PASSWORD=123qweQWE,./
$ export OS_PROJECT_NAME=admin
$ export OS_USER_DOMAIN_NAME=Default
$ export OS_PROJECT_DOMAIN_NAME=Default
$ export OS_AUTH_URL=http://controller:5000/v3
$ export OS_IDENTITY_API_VERSION=3
openstack domain create --description "An Example Domain" example
openstack project create --domain default --description "Service Project" service //create the service project
openstack project create --domain default --description "Demo Project" myproject //create the myproject project
openstack user create --domain default  --password-prompt myuser //create the myuser user
openstack role create myrole //create the myrole role
openstack role add --project myproject --user myuser myrole //grant the myrole role to the myuser user in the myproject project

Verification

unset OS_AUTH_URL OS_PASSWORD  //unset the temporary environment variables
openstack --os-auth-url http://controller:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name admin --os-username admin token issue //request a token as the admin user
openstack --os-auth-url http://controller:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name myproject --os-username myuser token issue //request a token as the myuser user

Create client environment scripts for the admin and myuser accounts.
Edit admin-openrc:

export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=123qweQWE,./
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

Edit demo-openrc:

export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=myproject
export OS_USERNAME=myuser
export OS_PASSWORD=123qweQWE,./
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

Using the scripts

. admin-openrc
openstack token issue //request an authentication token

Install the Glance service

On the controller node

mysql -u root -p123qweQWE,./
MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
  IDENTIFIED BY '123qweQWE,./';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
  IDENTIFIED BY '123qweQWE,./';
. admin-openrc
openstack user create --domain default --password-prompt glance //create the glance user
openstack role add --project service --user glance admin //add the admin role to the glance user and service project
openstack service create --name glance --description "OpenStack Image" image //create the glance service entity
openstack endpoint create --region RegionOne image public http://controller:9292 //create the Image API endpoints
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292
yum install openstack-glance

Edit /etc/glance/glance-api.conf:

[database]
# ...
connection = mysql+pymysql://glance:123qweQWE,./@controller/glance
[keystone_authtoken]
# ...
www_authenticate_uri  = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = 123qweQWE,./

[paste_deploy]
# ...
flavor = keystone
[glance_store]
# ...
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
su -s /bin/sh -c "glance-manage db_sync" glance //populate the Image service database
systemctl enable openstack-glance-api.service
systemctl start openstack-glance-api.service

Verification
Download the CirrOS image and import it into Glance.

. admin-openrc
wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
glance image-create --name "cirros" \
  --file cirros-0.4.0-x86_64-disk.img \
  --disk-format qcow2 --container-format bare \
  --visibility public
glance image-list
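The same upload can also be done with the unified openstack client instead of the legacy glance CLI; an equivalent sketch:
openstack image create "cirros" --file cirros-0.4.0-x86_64-disk.img --disk-format qcow2 --container-format bare --public
openstack image list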

Install the Placement service

On the controller node

mysql -u root -p123qweQWE,./
MariaDB [(none)]> CREATE DATABASE placement;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
  IDENTIFIED BY '123qweQWE,./';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
  IDENTIFIED BY '123qweQWE,./';
. admin-openrc
openstack user create --domain default --password-prompt placement //create the placement user
openstack role add --project service --user placement admin //add the admin role to the placement user and service project
openstack service create --name placement --description "Placement API" placement //create the Placement service entry
openstack endpoint create --region RegionOne placement public http://controller:8778  //create the Placement API endpoints
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778
yum install openstack-placement-api

Edit /etc/placement/placement.conf:

[placement_database]
# ...
connection = mysql+pymysql://placement:123qweQWE,./@controller/placement
[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = 123qweQWE,./
su -s /bin/sh -c "placement-manage db sync" placement //populate the placement database
systemctl restart httpd
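Note: on CentOS 7 the packaged /etc/httpd/conf.d/00-placement-api.conf sometimes lacks an access rule for /usr/bin, which makes the placement API return 403 errors to nova later on. If that happens, a commonly used fix (a sketch; adapt it to your packaged vhost) is to add the following inside the VirtualHost block and restart httpd:
<Directory /usr/bin>
   <IfVersion >= 2.4>
      Require all granted
   </IfVersion>
   <IfVersion < 2.4>
      Order allow,deny
      Allow from all
   </IfVersion>
</Directory>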

Verification

. admin-openrc
placement-status upgrade check
pip install osc-placement //install the osc-placement plugin
openstack --os-placement-api-version 1.2 resource class list --sort-column name
openstack --os-placement-api-version 1.6 trait list --sort-column name

Install the Nova service

On the controller node

mysql -u root -p123qweQWE,./
MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
  IDENTIFIED BY '123qweQWE,./';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
  IDENTIFIED BY '123qweQWE,./';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
  IDENTIFIED BY '123qweQWE,./';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
  IDENTIFIED BY '123qweQWE,./';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
  IDENTIFIED BY '123qweQWE,./';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
  IDENTIFIED BY '123qweQWE,./';
. admin-openrc
openstack user create --domain default --password-prompt nova //create the nova user
openstack role add --project service --user nova admin //add the admin role to the nova user
openstack service create --name nova --description "OpenStack Compute" compute //create the nova service entity
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1  //create the Compute API endpoints
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
openstack endpoint create --region RegionOne  compute admin http://controller:8774/v2.1
yum install openstack-nova-api openstack-nova-conductor openstack-nova-novncproxy openstack-nova-scheduler

Edit /etc/nova/nova.conf:

[DEFAULT]
# ...
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:123qweQWE,./@controller:5672/
my_ip = 10.1.102.195
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api_database]
# ...
connection = mysql+pymysql://nova:123qweQWE,./@controller/nova_api

[database]
# ...
connection = mysql+pymysql://nova:123qweQWE,./@controller/nova
[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = 123qweQWE,./
[vnc]
enabled = true
# ...
server_listen = $my_ip
server_proxyclient_address = $my_ip
[glance]
# ...
api_servers = http://controller:9292
[oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp
[placement]
# ...
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = 123qweQWE,./
su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova //register the cell0 database
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
su -s /bin/sh -c "nova-manage db sync" nova
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
systemctl enable openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl start openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

On the compute node

yum install openstack-nova-compute

Edit /etc/nova/nova.conf:

[DEFAULT]
# ...
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:123qweQWE,./@controller
my_ip = 10.1.102.196
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = 123qweQWE,./
[vnc]
# ...
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
[glance]
# ...
api_servers = http://controller:9292
[oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp
[placement]
# ...
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = 123qweQWE,./
Check whether the compute node supports hardware acceleration: egrep -c '(vmx|svm)' /proc/cpuinfo. If the command returns one or greater, the node supports hardware acceleration and usually needs no extra configuration. If it returns zero, the node does not support hardware acceleration and libvirt must be configured to use QEMU instead of KVM, as below:
[libvirt]
# ...
virt_type = qemu
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service
. admin-openrc
openstack compute service list --service nova-compute 
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova //discover the compute hosts

When you add new compute nodes, you must run nova-manage cell_v2 discover_hosts on the controller node to register them. Alternatively, set the following in /etc/nova/nova.conf:
[scheduler]
discover_hosts_in_cells_interval = 300
Verification

. admin-openrc
openstack compute service list
openstack catalog list
openstack image list
nova-status upgrade check

Install the Neutron service

On the controller node

mysql -u root -p123qweQWE,./
MariaDB [(none)]> CREATE DATABASE neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
  IDENTIFIED BY '123qweQWE,./';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
  IDENTIFIED BY '123qweQWE,./';
. admin-openrc
openstack user create --domain default --password-prompt neutron
openstack role add --project service --user neutron admin
openstack service create --name neutron --description "OpenStack Networking" network
openstack endpoint create --region RegionOne network public http://controller:9696
openstack endpoint create --region RegionOne network internal http://controller:9696
openstack endpoint create --region RegionOne network admin http://controller:9696
yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables

Edit /etc/neutron/neutron.conf:

[DEFAULT]
# ...
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:123qweQWE,./@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[database]
# ...
connection = mysql+pymysql://neutron:123qweQWE,./@controller/neutron
[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123qweQWE,./
[nova]
# ...
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = 123qweQWE,./
[oslo_concurrency]
# ...
lock_path = /var/lib/neutron/tmp

Edit /etc/neutron/plugins/ml2/ml2_conf.ini:

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security
[ml2_type_flat]
# ...
flat_networks = provider
[ml2_type_vxlan]
# ...
vni_ranges = 1:1000
[securitygroup]
# ...
enable_ipset = true

Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini:

[linux_bridge]
physical_interface_mappings = provider:eth0
[vxlan]
enable_vxlan = true
local_ip = 10.1.102.195
l2_population = true
[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
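The official install guide also requires the bridge netfilter sysctls to be enabled on both the controller and the compute node before starting the Linux bridge agent; a minimal sketch on CentOS 7:
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf   # load the module at boot
cat >> /etc/sysctl.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl -p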

Edit /etc/neutron/l3_agent.ini:

[DEFAULT]
# ...
interface_driver = linuxbridge

Edit /etc/neutron/dhcp_agent.ini:

[DEFAULT]
# ...
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true

Edit /etc/neutron/metadata_agent.ini:

[DEFAULT]
# ...
nova_metadata_host = controller
metadata_proxy_shared_secret = 123qweQWE,./

Edit /etc/nova/nova.conf:

[neutron]
# ...
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123qweQWE,./
service_metadata_proxy = true
metadata_proxy_shared_secret = 123qweQWE,./
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
systemctl restart openstack-nova-api.service
systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl enable neutron-l3-agent.service
systemctl start neutron-l3-agent.service
vim /etc/security/limits.conf
*               soft    nofile            1024000
*               hard    nofile            1024000

On the compute node

yum install openstack-neutron-linuxbridge ebtables ipset

Edit /etc/neutron/neutron.conf:

[DEFAULT]
# ...
transport_url = rabbit://openstack:123qweQWE,./@controller
auth_strategy = keystone
[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123qweQWE,./
[oslo_concurrency]
# ...
lock_path = /var/lib/neutron/tmp

Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini:

[linux_bridge]
physical_interface_mappings = provider:eth0
[vxlan]
enable_vxlan = true
local_ip = 10.1.102.196
l2_population = true
[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
vim /etc/security/limits.conf
*               soft    nofile            1024000
*               hard    nofile            1024000

Edit /etc/nova/nova.conf:

[neutron]
# ...
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123qweQWE,./
systemctl restart openstack-nova-compute.service
systemctl enable neutron-linuxbridge-agent.service
systemctl start neutron-linuxbridge-agent.service

Verify on the controller node

. admin-openrc
openstack extension list --network
openstack network agent list

Install the Horizon service

On the controller node
Requirements:
Python 2.7, 3.6, or 3.7
Django 1.11, 2.0, or 2.2
Django 2.0 and 2.2 support is experimental in the Train release.

yum install openstack-dashboard

Edit /etc/openstack-dashboard/local_settings:

#Configure the dashboard to use OpenStack services on the controller node
OPENSTACK_HOST = "controller"
#Hosts allowed to access the dashboard; accepting all hosts is insecure and should not be used in production
ALLOWED_HOSTS = ['*']
#ALLOWED_HOSTS = ['one.example.com', 'two.example.com']
#Configure the memcached session storage service
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
         'LOCATION': 'controller:11211',
    }
}
#Enable the Identity API version 3
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
#Enable support for domains
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
#Configure the API versions
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}
#Configure Default as the default domain for users created through the dashboard
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
#Configure user as the default role for users created through the dashboard
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
#If you chose networking option 1, disable support for layer-3 networking services; with option 2 (used here) they can be enabled
OPENSTACK_NEUTRON_NETWORK = {
    #Automatically allocated networks
    'enable_auto_allocated_network': False,
    #Neutron distributed virtual router (DVR)
    'enable_distributed_router': False,
    #Floating IP topology check
    'enable_fip_topology_check': False,
    #Highly available router mode
    'enable_ha_router': False,
    #The next three options are deprecated; the official documentation leaves them disabled
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
    #IPv6 networking
    'enable_ipv6': True,
    #Neutron quota support
    'enable_quotas': True,
    #RBAC policies
    'enable_rbac_policy': True,
    #Router menu and floating IP support; enable when the Neutron deployment includes layer-3 services
    'enable_router': True,
    #Default DNS name servers
    'default_dns_nameservers': [],
    #Provider network types offered when creating a network
    'supported_provider_types': ['*'],
    #Segmentation ID ranges for provider networks; applies only to VLAN, GRE, and VXLAN network types
    'segmentation_id_range': {},
    #Additional provider network types
    'extra_provider_types': {},
    #Supported VNIC types for the port binding extension
    #'supported_vnic_types': ['*'],
    #Physical networks
    #'physical_networks': [],
}
#Set the time zone to Asia/Shanghai
TIME_ZONE = "Asia/Shanghai"

Rebuild the Apache configuration for the dashboard

cd /usr/share/openstack-dashboard
python manage.py make_web_conf --apache > /etc/httpd/conf.d/openstack-dashboard.conf
ln -s /etc/openstack-dashboard /usr/share/openstack-dashboard/openstack_dashboard/conf
systemctl restart httpd.service memcached.service
systemctl enable httpd.service memcached.service

Verification
Access the dashboard with a web browser at http://controller/dashboard
Domain: default
Username: admin
Password: 123qweQWE,./

Install the Cinder service

On the controller node

mysql -u root -p123qweQWE,./
MariaDB [(none)]> CREATE DATABASE cinder;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
  IDENTIFIED BY '123qweQWE,./';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
  IDENTIFIED BY '123qweQWE,./';
. admin-openrc
openstack user create --domain default --password-prompt cinder
openstack role add --project service --user cinder admin
openstack service create --name cinderv2  --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne  volumev2 internal http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne  volumev2 admin http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne  volumev3 admin http://controller:8776/v3/%\(project_id\)s
yum install openstack-cinder
Note: the installation may report a failure for an nfs-related package; download the rpm manually, install it, and rerun the installation.

Edit /etc/cinder/cinder.conf:

[DEFAULT]
# ...
transport_url = rabbit://openstack:123qweQWE,./@controller
auth_strategy = keystone
my_ip = 10.1.102.195
[database]
# ...
connection = mysql+pymysql://cinder:123qweQWE,./@controller/cinder
[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = 123qweQWE,./
[oslo_concurrency]
# ...
lock_path = /var/lib/cinder/tmp
su -s /bin/sh -c "cinder-manage db sync" cinder

Edit /etc/nova/nova.conf:

[cinder]
os_region_name = RegionOne
systemctl restart openstack-nova-api.service
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service

On the block storage node

yum install lvm2 device-mapper-persistent-data
systemctl enable lvm2-lvmetad.service
systemctl start lvm2-lvmetad.service
pvcreate /dev/sdb
vgcreate cinder-volumes /dev/sdb

Edit /etc/lvm/lvm.conf:

devices {
...
filter = [ "a/sdb/", "r/.*/"]
}
If the operating system disk on the storage node also uses LVM, add that device to the filter as well.
yum install openstack-cinder targetcli python-keystone

Edit /etc/cinder/cinder.conf:

[DEFAULT]
# ...
transport_url = rabbit://openstack:123qweQWE,./@controller
auth_strategy = keystone
my_ip = 10.1.102.197
enabled_backends = lvm
glance_api_servers = http://controller:9292
[database]
# ...
connection = mysql+pymysql://cinder:123qweQWE,./@controller/cinder
[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = 123qweQWE,./
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
target_protocol = iscsi
target_helper = lioadm
[oslo_concurrency]
# ...
lock_path = /var/lib/cinder/tmp
systemctl enable openstack-cinder-volume.service target.service
systemctl start openstack-cinder-volume.service target.service

Verify on the controller node

. admin-openrc
openstack volume service list

Install the Swift service (optional)

On the controller node

. admin-openrc
openstack user create --domain default --password-prompt swift
openstack role add --project service --user swift admin
openstack service create --name swift --description "OpenStack Object Storage" object-store
openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(project_id\)s
openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(project_id\)s
openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1
yum install openstack-swift-proxy python-swiftclient python-keystoneclient python-keystonemiddleware memcached
curl -o /etc/swift/proxy-server.conf https://opendev.org/openstack/swift/raw/branch/master/etc/proxy-server.conf-sample  //fetch the proxy service configuration file from the Object Storage source repository

Edit /etc/swift/proxy-server.conf:

[DEFAULT]
...
bind_port = 8080
user = swift
swift_dir = /etc/swift
[pipeline:main]
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk ratelimit authtoken keystoneauth container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server
[app:proxy-server]
use = egg:swift#proxy
...
account_autocreate = True
[filter:keystoneauth]
use = egg:swift#keystoneauth
...
operator_roles = admin,user
[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = swift
password = 123qweQWE,./
delay_auth_decision = True
[filter:cache]
use = egg:swift#memcache
...
memcache_servers = controller:11211

On the Swift storage node

yum install xfsprogs rsync
mkfs.xfs /dev/sdb
mkfs.xfs /dev/sdc
mkdir -p /srv/node/sdb
mkdir -p /srv/node/sdc
blkid //look up the UUIDs of the new partitions

Edit /etc/fstab:

UUID="<UUID-from-output-above>" /srv/node/sdb xfs noatime 0 2
UUID="<UUID-from-output-above>" /srv/node/sdc xfs noatime 0 2
mount /srv/node/sdb
mount /srv/node/sdc

Create or edit /etc/rsyncd.conf:

uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 10.1.102.198
[account]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/account.lock
[container]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/container.lock
[object]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/object.lock
systemctl enable rsyncd.service
systemctl start rsyncd.service
yum install openstack-swift-account openstack-swift-container openstack-swift-object
curl -o /etc/swift/account-server.conf https://opendev.org/openstack/swift/raw/branch/master/etc/account-server.conf-sample
curl -o /etc/swift/container-server.conf https://opendev.org/openstack/swift/raw/branch/master/etc/container-server.conf-sample
curl -o /etc/swift/object-server.conf https://opendev.org/openstack/swift/raw/branch/master/etc/object-server.conf-sample

Edit /etc/swift/account-server.conf:

[DEFAULT]
...
bind_ip = 10.1.102.198
bind_port = 6202
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = True
[pipeline:main]
pipeline = healthcheck recon account-server
[filter:recon]
use = egg:swift#recon
...
recon_cache_path = /var/cache/swift

Edit /etc/swift/container-server.conf:

[DEFAULT]
...
bind_ip = 10.1.102.198
bind_port = 6201
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = True
[pipeline:main]
pipeline = healthcheck recon container-server
[filter:recon]
use = egg:swift#recon
...
recon_cache_path = /var/cache/swift

Edit /etc/swift/object-server.conf:

[DEFAULT]
...
bind_ip = 10.1.102.198
bind_port = 6200
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = True
[pipeline:main]
pipeline = healthcheck recon object-server
[filter:recon]
use = egg:swift#recon
...
recon_cache_path = /var/cache/swift
recon_lock_path = /var/lock
chown -R swift:swift /srv/node
mkdir -p /var/cache/swift
chown -R root:swift /var/cache/swift
chmod -R 775 /var/cache/swift
firewall-cmd --permanent --add-port=6200/tcp
firewall-cmd --permanent --add-port=6201/tcp
firewall-cmd --permanent --add-port=6202/tcp

Create and distribute the initial rings on the controller node
Create the account ring

swift-ring-builder account.builder create 10 3 1
swift-ring-builder account.builder add --region 1 --zone 1 --ip 10.1.102.198 --port 6202 --device sdb --weight 100 //repeat this command for every device on the object storage nodes
swift-ring-builder account.builder //verify the ring contents
swift-ring-builder account.builder rebalance //rebalance the ring

Create the container ring

swift-ring-builder container.builder create 10 3 1
swift-ring-builder container.builder add  --region 1 --zone 1 --ip 10.1.102.198 --port 6201 --device sdb --weight 100 //repeat this command for every device on the object storage nodes
swift-ring-builder container.builder
swift-ring-builder container.builder rebalance

Create the object ring

swift-ring-builder object.builder create 10 3 1
swift-ring-builder object.builder add --region 1 --zone 1 --ip 10.1.102.198 --port 6200 --device sdb --weight 100  //repeat this command for every device on the object storage nodes
swift-ring-builder object.builder
swift-ring-builder object.builder rebalance

Copy the account.ring.gz, container.ring.gz, and object.ring.gz files to the /etc/swift directory on every storage node and on any other node running the proxy service.
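For example, assuming the ring builders were run from /etc/swift on the controller, the files can be pushed to the storage node used in this guide with scp:
scp /etc/swift/*.ring.gz root@10.1.102.198:/etc/swift/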
On the controller node

curl -o /etc/swift/swift.conf https://opendev.org/openstack/swift/raw/branch/master/etc/swift.conf-sample

Edit /etc/swift/swift.conf, replacing HASH_PATH_SUFFIX and HASH_PATH_PREFIX with unique values:

[swift-hash]
...
swift_hash_path_suffix = HASH_PATH_SUFFIX
swift_hash_path_prefix = HASH_PATH_PREFIX
[storage-policy:0]
...
name = Policy-0
default = yes

Copy the swift.conf file to the /etc/swift directory on every storage node and on any other node running the proxy service. On all nodes, ensure correct ownership of the configuration directory:

chown -R root:swift /etc/swift

On the controller node and on any other node running the proxy service, start the Object Storage proxy service and its dependencies and configure them to start at boot:

systemctl enable openstack-swift-proxy.service memcached.service
systemctl start openstack-swift-proxy.service memcached.service

On the storage nodes, start the Object Storage services and configure them to start at boot:

systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service

Verify on the controller node

chcon -R system_u:object_r:swift_data_t:s0 /srv/node  //relabel the Swift data directory (run on the storage node if SELinux is enforcing)
. demo-openrc
swift stat
openstack container create container1
openstack object create container1 FILE
openstack object list container1
openstack object save container1 FILE

Install the backup service on the block storage node (optional)

yum install openstack-cinder

Edit /etc/cinder/cinder.conf:

[DEFAULT]
# ...
backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver
backup_swift_url = SWIFT_URL  //replace SWIFT_URL with the URL of the object-store API endpoint
The URL can be found on the controller node with: openstack catalog show object-store
systemctl enable openstack-cinder-backup.service
systemctl start openstack-cinder-backup.service
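Once the service is running, it can be exercised from the controller node; a hedged sketch (backup1 and volume1 are example names, volume1 standing for any existing volume such as the one created in the last section of this guide):
. admin-openrc
openstack volume service list                            # cinder-backup should be listed as up/enabled
openstack volume backup create --name backup1 volume1    # example backup of an existing volume
openstack volume backup list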

Create virtual networks

Create the provider network on the controller node

. admin-openrc
openstack network create  --share --external --provider-physical-network provider --provider-network-type flat provider //create the provider network
openstack subnet create --network provider --allocation-pool start=10.1.102.200,end=10.1.102.250 --dns-nameserver 114.114.114.114 --gateway 10.1.103.254 --subnet-range 10.1.100.0/22 provider //create a subnet on the provider network

Create the self-service network on the controller node

. demo-openrc
openstack network create selfservice //create the self-service (private) network
openstack subnet create --network selfservice --dns-nameserver 114.114.114.114 --gateway 172.16.1.1 --subnet-range 172.16.1.0/24 selfservice  //create a subnet on the self-service network
. demo-openrc
openstack router create router //create a router
openstack router add subnet router selfservice //attach the self-service subnet to the router
openstack router set router --external-gateway provider //set the provider network as the router's gateway
. admin-openrc
ip netns
openstack port list --router router

From the controller node and the other nodes, you should be able to ping the router's IP addresses.
Create a flavor

openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano
. demo-openrc
ssh-keygen -q -N ""  //generate a key pair
openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey //add the public key
openstack keypair list //verify the key pair

Add security group rules

openstack security group rule create --proto icmp default //permit ICMP in the default security group
openstack security group rule create --proto tcp --dst-port 22 default //permit SSH access

Launch instances

Launch an instance on the provider network

. demo-openrc
openstack flavor list
openstack image list
openstack network list
openstack security group list
openstack server create --flavor m1.nano --image cirros --nic net-id=PROVIDER_NET_ID --security-group default --key-name mykey provider-instance //launch the instance; PROVIDER_NET_ID is the provider network ID
openstack server list
openstack console url show provider-instance //get a virtual console URL for the instance; use it to verify network connectivity and remote access
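From the controller node you should also be able to reach the instance directly on the provider network; a sketch with a placeholder address (take the real address from openstack server list):
ping -c 4 <provider-instance-IP>
ssh cirros@<provider-instance-IP>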

Launch an instance on the self-service network

. demo-openrc
openstack flavor list
openstack image list
openstack network list
openstack security group list
openstack server create --flavor m1.nano --image cirros  --nic net-id=SELFSERVICE_NET_ID --security-group default --key-name mykey selfservice-instance
openstack server list
openstack console url show selfservice-instance
openstack floating ip create provider //create a floating IP
openstack server add floating ip selfservice-instance 10.1.102.201 //attach the floating IP to the instance
openstack server list
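The instance should now be reachable through the floating IP attached above:
ping -c 4 10.1.102.201
ssh cirros@10.1.102.201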

Create a volume and attach it to an instance

. demo-openrc
openstack volume create --size 1 volume1 //create a 1 GB volume
openstack volume list
openstack server add volume INSTANCE_NAME VOLUME_NAME  //attach the volume to an instance
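For example, with the instance and volume created above:
openstack server add volume provider-instance volume1
openstack volume list   # volume1 should now show as in-use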

Log in to the instance and check the new block device with fdisk -l.

Done.

Original article: https://blog.csdn.net/weixin_43757555/article/details/113913682
