Production-grade Kubernetes Deployment with Kubespray


    Source code:

    1. Prerequisites

    Install ESXi (VMware vSphere Hypervisor; in this example at 192.168.1.50):

    Install vCenter (VMware vCenter Server Appliance; in this example at 192.168.1.51):

    Write vmware-vmvisor-installer-6.7.0.update03-14320388.x86_64.iso to a USB drive and install ESXi on a physical machine, then open vmware-vcsa-all-6.7.0-14367737.iso and run the installer for your operating system from the vcsa-ui-installer directory to install vCenter.

    If you already have physical machines or virtual machines available, you can simply skip the VMware-specific steps. This walkthrough plans for 9 VMs (or physical machines): 3 masters (192.168.1.54-56) and 6 worker nodes (192.168.1.57-62).

    2. Prepare a CentOS Template

    • In the vSphere Client, create a VM folder named k8s

    • In vCenter, create a CentOS VM named centos7

    • Install CentOS 7.6 in the VM

    • Install VMware Tools

      yum install -y open-vm-tools
    • Install govc:

      yum install wget
      wget https://github.com/vmware/govmomi/releases/download/prerelease-v0.21.0-58-g8d28646/govc_linux_amd64.gz
      gzip -d govc_linux_amd64.gz
      chmod +x govc_linux_amd64
      mv govc_linux_amd64 /usr/local/bin/govc
    • Configure govc (the environment variables are uppercase)

      vi .bash_profile
      export GOVC_URL='192.168.1.51'
      export GOVC_USERNAME='administrator@vsphere.local'
      export GOVC_PASSWORD='wyf.38476'
      export GOVC_INSECURE=1
      source .bash_profile
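
      To confirm govc can reach vCenter before going further, a quick sanity check (assuming the variables above have been sourced):

      govc about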
    • Enable disk UUID on the VM (the value after -vm is the VM name)

      govc vm.change -e="disk.enableUUID=1" -vm='centos7'
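
      To verify the flag took effect, govc can print the VM's ExtraConfig entries (a quick check using vm.info's -e flag):

      govc vm.info -e centos7 | grep -i enableuuid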
    • Install perl and net-tools, which Ansible's guest customization needs when setting VM IPs

      yum install -y perl net-tools
    • Disable the firewall

       systemctl stop firewalld && systemctl disable firewalld
      
    • Synchronize time

      yum -y install ntp  && ntpdate ntp1.aliyun.com && systemctl start ntpd && systemctl enable ntpd
      
    • Passwordless SSH from the control machine to the VMs

      Generate a public/private key pair:

      ssh-keygen

      Copy the local public key into each remote machine's authorized_keys file, for example:

      ssh-copy-id root@192.168.1.54
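
      Once the clones from section 3 are online, the remaining hosts can be covered in one pass (a sketch, assuming the IP plan from section 1):

      for i in $(seq 54 62); do ssh-copy-id root@192.168.1.$i; done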
    • Convert the VM into a template
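
      This can be done from the vSphere Client context menu, or with a govc one-liner (a sketch, assuming the VM is named centos7 as above):

      govc vm.markastemplate centos7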

    3. Provision the Virtual Machines

    • Install Ansible and the pyVmomi vSphere SDK

      pip3 install pyvmomi
      pip3 install ansible
    • Write the Ansible playbook vm.yml:

      - hosts: 127.0.0.1
        connection: local
        become: false
        gather_facts: false
        serial: 1
        tasks:
          - name: create master nodes
            vmware_guest:
              hostname: "{{ vcenter_hostname }}"
              username: "{{ vcenter_username }}"
              password: "{{ vcenter_password }}"
              validate_certs: no
              datacenter: "{{ datacenter }}"
              state: present
              folder: "{{ folder }}"
              template: "{{ template }}"
              name: "{{ item.key }}"
              cluster: "{{ cluster }}"
              disk:
                - size_gb: 30
                  type: thin
                  datastore: datastore1
              hardware:
                memory_mb: 2048
                num_cpus: 2
                scsi: paravirtual
              networks:
                - name: VM Network
                  ip: "{{ item.value }}"
                  netmask: 255.255.255.0
                  gateway: 192.168.1.1
              wait_for_ip_address: true
              customization:
                dns_servers:
                  - 202.102.192.68
                  - 114.114.114.114
            with_dict: "{{ masters }}"
            delegate_to: localhost
          - name: create worker nodes
            vmware_guest:
              hostname: "{{ vcenter_hostname }}"
              username: "{{ vcenter_username }}"
              password: "{{ vcenter_password }}"
              validate_certs: no
              datacenter: "{{ datacenter }}"
              state: present
              folder: "{{ folder }}"
              template: "{{ template }}"
              name: "{{ item.key }}"
              cluster: "{{ cluster }}"
              disk:
                - size_gb: 50
                  type: thin
                  datastore: datastore1
              hardware:
                memory_mb: 8192
                num_cpus: 4
                scsi: paravirtual
              networks:
                - name: VM Network
                  ip: "{{ item.value }}"
                  netmask: 255.255.255.0
                  gateway: 192.168.1.1
              wait_for_ip_address: true
              customization:
                dns_servers:
                  - 202.102.192.68
                  - 114.114.114.114
            with_dict: "{{ workers }}"
            delegate_to: localhost
      

      Variables in group_vars/all.yml:

        vcenter_hostname: 192.168.1.51
        vcenter_username: administrator@vsphere.local
        vcenter_password: wyf.38476
        datacenter: datacenter
        folder: /k8s
        template: centos7
        vm_name: master1
        cluster:
        masters: {"master1":"192.168.1.54", "master2":"192.168.1.55","master3":"192.168.1.56"}
        workers: {"node1":"192.168.1.57","node2":"192.168.1.58","node3":"192.168.1.59","node4":"192.168.1.60","node5":"192.168.1.61","node6":"192.168.1.62"}

    Create the VMs:

    ansible-playbook vm.yml    
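
    If the run succeeds, the new VMs appear under the k8s folder (a quick check with govc; the path assumes the datacenter is named "datacenter", as in group_vars/all.yml):

    govc ls /datacenter/vm/k8s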

    4. Install with Kubespray

    • Download the latest Kubespray release:

    • Install the Ansible dependencies

      sudo pip3 install -r requirements.txt
    • Copy the sample inventory

      cp -rfp inventory/sample inventory/mycluster
    • Generate the inventory file with the inventory builder

      declare -a ips=(192.168.1.54 192.168.1.55 192.168.1.56 192.168.1.57 192.168.1.58 192.168.1.59 192.168.1.60 192.168.1.61 192.168.1.62)
      CONFIG_FILE=inventory/mycluster/hosts.yml python3 contrib/inventory_builder/inventory.py ${ips[@]}

      Then edit the generated inventory/mycluster/hosts.yml, mainly to set the role each VM plays in the cluster. The edited file:

      all:
        hosts:
          master1:
            ansible_host: 192.168.1.54
            ip: 192.168.1.54
            access_ip: 192.168.1.54
            ansible_user: root
          master2:
            ansible_host: 192.168.1.55
            ip: 192.168.1.55
            access_ip: 192.168.1.55
            ansible_user: root
          master3:
            ansible_host: 192.168.1.56
            ip: 192.168.1.56
            access_ip: 192.168.1.56
            ansible_user: root
          node1:
            ansible_host: 192.168.1.57
            ip: 192.168.1.57
            access_ip: 192.168.1.57
            ansible_user: root
          node2:
            ansible_host: 192.168.1.58
            ip: 192.168.1.58
            access_ip: 192.168.1.58
            ansible_user: root
          node3:
            ansible_host: 192.168.1.59
            ip: 192.168.1.59
            access_ip: 192.168.1.59
            ansible_user: root
          node4:
            ansible_host: 192.168.1.60
            ip: 192.168.1.60
            access_ip: 192.168.1.60
            ansible_user: root
          node5:
            ansible_host: 192.168.1.61
            ip: 192.168.1.61
            access_ip: 192.168.1.61
            ansible_user: root
          node6:
            ansible_host: 192.168.1.62
            ip: 192.168.1.62
            access_ip: 192.168.1.62
            ansible_user: root
        children:
          kube-master:
            hosts:
              master1:
              master2:
              master3:
          kube-node:
            hosts:
              node1:
              node2:
              node3:
              node4:
              node5:
              node6:
          etcd:
            hosts:
              master1:
              master2:
              master3:
          k8s-cluster:
            children:
              kube-master:
              kube-node:
          calico-rr:
            hosts: {}
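
      Before the long cluster playbook run, it is worth confirming Ansible can reach every host with this inventory (a quick check, relying on the SSH keys set up in section 2):

      ansible -i inventory/mycluster/hosts.yml all -m ping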
      
    • Set vSphere as the cloud provider: edit inventory/mycluster/group_vars/all/all.yml and add the following:

      cloud_provider: vsphere
      vsphere_vcenter_ip: "192.168.1.51"
      vsphere_vcenter_port: 443
      vsphere_insecure: 1
      vsphere_user: "administrator@vsphere.local"
      vsphere_password: "wyf.38476"
      vsphere_datacenter: "datacenter"
      vsphere_datastore: "datastore1"
      vsphere_working_dir: "k8s"
      vsphere_scsi_controller_type: "pvscsi"
    • Switch to mirror download sources to speed up installation

      inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml

      # kubernetes image repo define
      kube_image_repo: "gcr.azk8s.cn/google-containers"

      roles/download/defaults/main.yml

      # gcr and kubernetes image repo define
      gcr_image_repo: "gcr.azk8s.cn/google-containers"

      In roles/download/defaults/main.yml, apply the following substitutions:

      docker.io -> dockerhub.azk8s.cn (official images additionally need the library/ prefix)
      quay.io ->  quay.azk8s.cn
      gcr.io -> gcr.azk8s.cn
      k8s.gcr.io -> gcr.azk8s.cn/google-containers
      # download urls
      kubeadm_download_url: "https://storage.googleapis.com/kubernetes-release/release/{{ kubeadm_version }}/bin/linux/{{ image_arch }}/kubeadm"
      hyperkube_download_url: "https://storage.googleapis.com/kubernetes-release/release/{{ kube_version }}/bin/linux/{{ image_arch }}/hyperkube"
      etcd_download_url: "https://github.com/coreos/etcd/releases/download/{{ etcd_version }}/etcd-{{ etcd_version }}-linux-{{ image_arch }}.tar.gz"
      cni_download_url: "https://github.com/containernetworking/plugins/releases/download/{{ cni_version }}/cni-plugins-linux-{{ image_arch }}-{{ cni_version }}.tgz"
      calicoctl_download_url: "https://github.com/projectcalico/calicoctl/releases/download/{{ calico_ctl_version }}/calicoctl-linux-{{ image_arch }}"
      crictl_download_url: "https://github.com/kubernetes-sigs/cri-tools/releases/download/{{ crictl_version }}/crictl-{{ crictl_version }}-{{ ansible_system | lower }}-{{ image_arch }}.tar.gz"
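
      The image-repo substitutions above can be scripted rather than edited by hand (a sketch, assuming GNU sed; k8s.gcr.io is rewritten before the bare gcr.io pattern so the two rules don't collide, and docker.io is left as a manual edit because official images also need the library/ prefix -- review the diff before running):

      sed -i \
        -e 's#k8s\.gcr\.io#gcr.azk8s.cn/google-containers#g' \
        -e 's#gcr\.io#gcr.azk8s.cn#g' \
        -e 's#quay\.io#quay.azk8s.cn#g' \
        roles/download/defaults/main.yml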

      roles/container-engine/docker/defaults/main.yml

      # centos/redhat docker-ce repo
      docker_rh_repo_base_url: 'https://mirrors.aliyun.com/docker-ce/linux/centos/7/$basearch/stable'
      docker_rh_repo_gpgkey: 'https://mirrors.aliyun.com/docker-ce/linux/centos/gpg'
      # centos/redhat extras repo
      extras_rh_repo_base_url: "https://mirrors.aliyun.com/centos/$releasever/extras/$basearch/"
      extras_rh_repo_gpgkey: "https://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7"
    • Enable the Helm server component (Tiller)

      roles/kubernetes-apps/helm/defaults/main.yml

      helm_enabled: true
    • Install the Kubernetes cluster

      ansible-playbook -i inventory/mycluster/hosts.yml --become --become-user=root cluster.yml
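
      When the playbook finishes, the cluster can be sanity-checked from any master node (a quick check, relying on the root kubeconfig that kubespray places on the masters; the copy step in section 5 uses the same file):

      ssh root@192.168.1.54 kubectl get nodes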

    5. Connect to the Cluster

    • Install kubectl

      On macOS:

      brew install kubernetes-cli

      On Windows:

    • Configure the environment

      Log in to the master node at 192.168.1.54 over SSH and copy ~/.kube/config to the local machine.
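
      For example (a sketch, assuming the admin kubeconfig lives at /root/.kube/config on the master):

      mkdir -p ~/.kube
      scp root@192.168.1.54:/root/.kube/config ~/.kube/config
      kubectl get nodes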

    • Dashboard login address:

    • Access the Dashboard through a local proxy

      kubectl proxy
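
      With the proxy running, the Dashboard is typically reachable at http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/ (assuming kubespray deployed the dashboard into kube-system under its default service name).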
      
    • Generate a kubeconfig

      1. Create admin-role.yml:

        kind: ClusterRoleBinding
        apiVersion: rbac.authorization.k8s.io/v1beta1
        metadata:
          name: admin
          annotations:
            rbac.authorization.kubernetes.io/autoupdate: "true"
        roleRef:
          kind: ClusterRole
          name: cluster-admin
          apiGroup: rbac.authorization.k8s.io
        subjects:
          - kind: ServiceAccount
            name: admin
            namespace: kube-system
        ---
        apiVersion: v1
        kind: ServiceAccount
        metadata:
          name: admin
          namespace: kube-system
          labels:
            kubernetes.io/cluster-service: "true"
            addonmanager.kubernetes.io/mode: Reconcile
      2. Get the token (the secret name suffix will differ in each cluster)

        kubectl create -f admin-role.yml
        kubectl -n kube-system get secret | grep admin-token
        kubectl -n kube-system describe secret/admin-token-2qphr
      3. Append the obtained token to the config file

        token: ....
      4. On the login page, choose "Kubeconfig" and select the config file to log in

      5. Or choose "Token" and paste the token directly
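
      Steps 2 and 3 can be collapsed into a single line that prints the token regardless of the secret's generated suffix (a sketch, assuming a POSIX shell with kubectl and awk available):

        kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | awk '/admin-token/ {print $1}')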

    6. Storage Class Configuration

    For vSphere-backed Kubernetes storage, create storage/vsphere-storage.yml:

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: standard
      annotations:
        storageclass.kubernetes.io/is-default-class: "true"
    provisioner: kubernetes.io/vsphere-volume
    parameters:
      diskformat: zeroedthick
      datastore: datastore1

    kubectl create -f vsphere-storage.yml

    To verify it works, create storage/test-pvc.yml:

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: pvcsc-vsan
      annotations:
        volume.beta.kubernetes.io/storage-class: standard
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 2Gi

    kubectl create -f test-pvc.yml
    kubectl describe pvc pvcsc-vsan

    Once the claim is bound, the provisioned virtual disk is visible in the vSphere Client:

    7. Install and Configure Helm

    • Install Helm

      On macOS:

      brew install kubernetes-helm

      On Windows, install with Chocolatey (installation reference:

      choco install kubernetes-helm
    • Initialize Helm

      helm init --upgrade -i gcr.azk8s.cn/kubernetes-helm/tiller:v2.14.3 --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts/
      helm repo update

    • Create the tiller service account and bind it to cluster-admin

      kubectl create serviceaccount --namespace kube-system tiller
      kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
      kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
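
      Once the Tiller pod is running, both the client and server versions should report (a quick check):

      helm version
      kubectl -n kube-system get pods | grep tiller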