Failed
Console Output

Started by upstream project "doctor-verify-master" build number 151
originally caused by:
 Triggered by Gerrit: https://gerrit.opnfv.org/gerrit/50973
[EnvInject] - Loading node environment variables.
Building remotely on nokia-pod1 (nokia opnfv-sysinfo doctor-apex-x86_64) in workspace /home/jenkins/opnfv_slave_root/workspace/doctor-verify-all-apex-sample-x86_64-master
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent]   Exec ssh-agent (binary ssh-agent on a remote machine)
$ ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-5jWps45MIwa0/agent.24391
SSH_AGENT_PID=24393
[ssh-agent] Started.
Running ssh-add (command line suppressed)
Identity added: /home/jenkins/opnfv_slave_root/workspace/doctor-verify-all-apex-sample-x86_64-master@tmp/private_key_1327086660466796526.key (/home/jenkins/opnfv_slave_root/workspace/doctor-verify-all-apex-sample-x86_64-master@tmp/private_key_1327086660466796526.key)
[ssh-agent] Using credentials jenkins-ci (Jenkins Master SSH)
using credential d42411ac011ad6f3dd2e1fa34eaa5d87f910eb2e
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository https://gerrit.opnfv.org/gerrit/doctor
 > git init /home/jenkins/opnfv_slave_root/workspace/doctor-verify-all-apex-sample-x86_64-master # timeout=10
Fetching upstream changes from https://gerrit.opnfv.org/gerrit/doctor
 > git --version # timeout=10
using GIT_SSH to set credentials Jenkins Master SSH
 > git fetch --tags --progress https://gerrit.opnfv.org/gerrit/doctor +refs/heads/*:refs/remotes/origin/* # timeout=15
 > git config remote.origin.url https://gerrit.opnfv.org/gerrit/doctor # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://gerrit.opnfv.org/gerrit/doctor # timeout=10
Fetching upstream changes from https://gerrit.opnfv.org/gerrit/doctor
using GIT_SSH to set credentials Jenkins Master SSH
 > git fetch --tags --progress https://gerrit.opnfv.org/gerrit/doctor refs/changes/73/50973/13 # timeout=15
Checking out Revision c942be45f70ec9f1a7dc78a58ba5ebd248eaf4ae (refs/changes/73/50973/13)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f c942be45f70ec9f1a7dc78a58ba5ebd248eaf4ae # timeout=15
Commit message: "[Don't merge] test for CI"
 > git rev-parse FETCH_HEAD^{commit} # timeout=10
 > git rev-list --no-walk 7bba7d67beec7b3db26ab1319bf4080565053096 # timeout=10
No emails were triggered.
[doctor-verify-all-apex-sample-x86_64-master] $ /usr/bin/env bash /tmp/jenkins9080140043961096618.sh
Gathering IP information for Apex installer VM
 6     undercloud                     running
Installer VM detected
Installer ip is 192.168.122.40
fetch_os_creds.info: Fetching rc file...
fetch_os_creds.info: Verifying connectivity to 192.168.122.40...
fetch_os_creds.info: 192.168.122.40 is reachable!
fetch_os_creds.info: ... from Instack VM 192.168.122.40...
Warning: Permanently added '192.168.122.40' (ECDSA) to the list of known hosts.
-------- Credentials: --------
# Clear any old environment that may conflict.
for key in $( set | awk '{FS="="}  /^OS_/ {print $1}' ); do unset $key ; done
export OS_NO_CACHE=True
export COMPUTE_API_VERSION=1.1
export OS_USERNAME=admin
export no_proxy=,192.168.37.19,192.0.2.4
export OS_USER_DOMAIN_NAME=Default
export OS_VOLUME_API_VERSION=3
export OS_CLOUDNAME=overcloud
export OS_AUTH_URL=http://192.168.37.19:5000/v3
export NOVA_VERSION=1.1
export OS_IMAGE_API_VERSION=2
#export OS_PASSWORD=N9flPG9TD7lwww7vsVewDHyBK
export OS_PASSWORD=admin
export OS_PROJECT_DOMAIN_NAME=Default
export OS_IDENTITY_API_VERSION=3
export OS_PROJECT_NAME=admin
export OS_AUTH_TYPE=password
export PYTHONWARNINGS="ignore:Certificate has no, ignore:A true SSLContext object is not available"

# Add OS_CLOUDNAME to PS1
if [ -z "${CLOUDPROMPT_ENABLED:-}" ]; then
    export PS1=${PS1:-""}
    export PS1=\${OS_CLOUDNAME:+"(\$OS_CLOUDNAME)"}\ $PS1
    export CLOUDPROMPT_ENABLED=1
fi
export OS_PROJECT_ID=6e76c202356a46cbbd626ef7c00efc0c
export OS_TENANT_NAME=admin
export OS_REGION_NAME=regionOne
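The credentials dump above must yield a complete OS_* environment before the test suite runs; a minimal sanity check (a hypothetical helper, not part of doctor_tests) could look like:

```python
import os

# Variables the sourced openrc is expected to provide
# (names taken from the credentials dump above).
REQUIRED = [
    "OS_AUTH_URL", "OS_USERNAME", "OS_PASSWORD",
    "OS_PROJECT_NAME", "OS_USER_DOMAIN_NAME", "OS_PROJECT_DOMAIN_NAME",
]

def missing_credentials(env=os.environ):
    """Return the required OS_* variables that are unset or empty."""
    return [name for name in REQUIRED if not env.get(name)]
```

Running such a check before `tox` would fail fast instead of letting the OpenStack clients error out mid-test.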
[doctor-verify-all-apex-sample-x86_64-master] $ /bin/sh -xe /tmp/jenkins531018218748597293.sh
+ source /home/jenkins/opnfv-openrc.sh
+++ set
+++ awk '{FS="="}  /^OS_/ {print $1}'
++ export OS_NO_CACHE=True
++ OS_NO_CACHE=True
++ export COMPUTE_API_VERSION=1.1
++ COMPUTE_API_VERSION=1.1
++ export OS_USERNAME=admin
++ OS_USERNAME=admin
++ export no_proxy=,192.168.37.19,192.0.2.4
++ no_proxy=,192.168.37.19,192.0.2.4
++ export OS_USER_DOMAIN_NAME=Default
++ OS_USER_DOMAIN_NAME=Default
++ export OS_VOLUME_API_VERSION=3
++ OS_VOLUME_API_VERSION=3
++ export OS_CLOUDNAME=overcloud
++ OS_CLOUDNAME=overcloud
++ export OS_AUTH_URL=http://192.168.37.19:5000/v3
++ OS_AUTH_URL=http://192.168.37.19:5000/v3
++ export NOVA_VERSION=1.1
++ NOVA_VERSION=1.1
++ export OS_IMAGE_API_VERSION=2
++ OS_IMAGE_API_VERSION=2
++ export OS_PASSWORD=admin
++ OS_PASSWORD=admin
++ export OS_PROJECT_DOMAIN_NAME=Default
++ OS_PROJECT_DOMAIN_NAME=Default
++ export OS_IDENTITY_API_VERSION=3
++ OS_IDENTITY_API_VERSION=3
++ export OS_PROJECT_NAME=admin
++ OS_PROJECT_NAME=admin
++ export OS_AUTH_TYPE=password
++ OS_AUTH_TYPE=password
++ export 'PYTHONWARNINGS=ignore:Certificate has no, ignore:A true SSLContext object is not available'
++ PYTHONWARNINGS='ignore:Certificate has no, ignore:A true SSLContext object is not available'
++ '[' -z '' ']'
++ export PS1=
++ PS1=
++ export 'PS1=${OS_CLOUDNAME:+($OS_CLOUDNAME)} '
++ PS1='${OS_CLOUDNAME:+($OS_CLOUDNAME)} '
++ export CLOUDPROMPT_ENABLED=1
++ CLOUDPROMPT_ENABLED=1
++ export OS_PROJECT_ID=6e76c202356a46cbbd626ef7c00efc0c
++ OS_PROJECT_ID=6e76c202356a46cbbd626ef7c00efc0c
++ export OS_TENANT_NAME=admin
++ OS_TENANT_NAME=admin
++ export OS_REGION_NAME=regionOne
++ OS_REGION_NAME=regionOne
+ '[' -f /home/jenkins/os_cacert ']'
+ source /home/jenkins/opnfv-installer.sh
++ export INSTALLER_TYPE=apex
++ INSTALLER_TYPE=apex
++ export INSTALLER_IP=192.168.122.40
++ INSTALLER_IP=192.168.122.40
++ export SSH_KEY=/home/jenkins/installer_key_file
++ SSH_KEY=/home/jenkins/installer_key_file
+ sudo -E tox -e py34
py34 create: /home/jenkins/opnfv_slave_root/workspace/doctor-verify-all-apex-sample-x86_64-master/.tox/py34
py34 installdeps: -r/home/jenkins/opnfv_slave_root/workspace/doctor-verify-all-apex-sample-x86_64-master/requirements.txt
py34 develop-inst: /home/jenkins/opnfv_slave_root/workspace/doctor-verify-all-apex-sample-x86_64-master
py34 installed: DEPRECATION: Python 3.4 support has been deprecated. pip 19.1 will be the last one supporting it. Please upgrade your Python as Python 3.4 won't be maintained after March 2019 (cf PEP 429).,amqp==2.2.1,aodhclient==0.9.0,appdirs==1.4.3,asn1crypto==0.22.0,Babel==2.3.4,bcrypt==3.1.3,cachetools==2.0.0,certifi==2017.4.17,cffi==1.10.0,chardet==3.0.4,click==6.7,cliff==2.8.3,cmd2==0.7.5,contextlib2==0.5.5,cryptography==2.0.2,debtcollector==1.17.2,deprecation==1.0.1,-e git+https://gerrit.opnfv.org/gerrit/doctor@c942be45f70ec9f1a7dc78a58ba5ebd248eaf4ae#egg=doctor_tests,enum-compat==0.0.2,eventlet==0.20.0,fasteners==0.14.1,flake8==2.5.5,Flask==0.12.2,futurist==1.3.2,greenlet==0.4.15,idna==2.5,iso8601==0.1.11,itsdangerous==0.24,Jinja2==2.9.6,jsonpatch==1.16,jsonpointer==1.10,jsonschema==2.6.0,keystoneauth1==3.1.0,kombu==4.1.0,MarkupSafe==1.0,mccabe==0.4.0,monotonic==1.3,msgpack-python==0.4.8,netaddr==0.7.19,netifaces==0.10.6,openstacksdk==0.9.17,os-client-config==1.28.0,osc-lib==1.7.1,oslo.concurrency==3.21.2,oslo.config==4.11.2,oslo.context==2.17.2,oslo.i18n==3.17.1,oslo.log==3.30.3,oslo.messaging==5.30.8,oslo.middleware==3.30.2,oslo.serialization==2.20.2,oslo.service==1.25.2,oslo.utils==3.28.4,oslo.versionedobjects==1.26.3,paramiko==2.2.1,Paste==2.0.3,PasteDeploy==1.5.2,pbr==3.1.1,pep8==1.7.1,pika==0.10.0,pika-pool==0.1.3,positional==1.1.2,prettytable==0.7.2,pyasn1==0.3.1,pycparser==2.18,pyflakes==1.0.0,pyinotify==0.9.6,PyNaCl==1.1.2,pyOpenSSL==17.2.0,pyparsing==2.2.0,pyperclip==1.5.27,python-ceilometerclient==2.9.0,python-cinderclient==3.1.1,python-congressclient==1.8.1,python-dateutil==2.6.1,python-glanceclient==2.8.0,python-heatclient==1.11.1,python-keystoneclient==3.13.0,python-neutronclient==6.5.0,python-novaclient==9.1.2,python-openstackclient==3.12.2,python-swiftclient==3.4.1,python-vitrageclient==1.4.1,pytz==2017.2,PyYAML==3.12,repoze.lru==0.6,requests==2.18.2,requestsexceptions==1.3.0,rfc3986==1.1.0,Routes==2.4.1,scp==0.10.2,simplejson==3.11.1,six==1.10.0,statsd==3.2.1,stevedore==1.25.2,tenacity==4.4.0,urllib3==1.22,vine==1.1.4,virtualenv==15.1.0,warlock==1.2.0,WebOb==1.7.3,Werkzeug==0.12.2,wrapt==1.10.10
py34 runtests: PYTHONHASHSEED='3233671212'
py34 runtests: commands[0] | doctor-test
/home/jenkins/opnfv_slave_root/workspace/doctor-verify-all-apex-sample-x86_64-master/.tox/py34/lib/python3.4/site-packages/paramiko/client.py:711: UserWarning: Unknown ssh-ed25519 host key for 192.168.122.40: b'3dc1628c6fe2819031608f780639f40a'
  key.get_fingerprint())))
2019-04-20 10:02:04,328 main.py 130 INFO   doctor test starting.......
2019-04-20 10:02:04,329 apex.py 43 INFO   Setup Apex installer start......
2019-04-20 10:02:04,329 base.py 113 INFO   Get SSH keys from apex installer......
2019-04-20 10:02:04,695 apex.py 67 INFO   Get overcloud config details from Apex installer......
2019-04-20 10:02:04,695 base.py 174 INFO   Run command=source stackrc; nova list | grep ' overcloud-' in apex installer......
2019-04-20 10:02:11,836 base.py 183 INFO   Output=['| 2d0aedd7-4e30-4fee-bbd2-0feb8b1078c4 | overcloud-controller-0  | ACTIVE | -          | Running     | ctlplane=192.0.2.9 |', '| 37ceec17-a130-4826-87f0-67699d3350f5 | overcloud-novacompute-0 | ACTIVE | -          | Running     | ctlplane=192.0.2.6 |', '| 416383de-15fb-477a-9a6c-f0b21df9be70 | overcloud-novacompute-1 | ACTIVE | -          | Running     | ctlplane=192.0.2.7 |', '| 0f025a92-3365-4c3b-951e-dab9a6d9ca22 | overcloud-novacompute-2 | ACTIVE | -          | Running     | ctlplane=192.0.2.3 |'] command=source stackrc; nova list | grep ' overcloud-' in apex installer
2019-04-20 10:02:11,837 base.py 188 INFO   Check command=grep docker /home/stack/deploy_command return in apex installer......
2019-04-20 10:02:11,923 base.py 191 INFO   return 0
2019-04-20 10:02:11,924 apex.py 80 INFO   controller_ips:['192.0.2.9']
2019-04-20 10:02:11,924 apex.py 81 INFO   compute_ips:['192.0.2.6', '192.0.2.7', '192.0.2.3']
2019-04-20 10:02:11,925 apex.py 82 INFO   use_containers:True
2019-04-20 10:02:13,388 apex.py 102 INFO   Set apply patches start......
/home/jenkins/opnfv_slave_root/workspace/doctor-verify-all-apex-sample-x86_64-master/.tox/py34/lib/python3.4/site-packages/paramiko/client.py:711: UserWarning: Unknown ssh-ed25519 host key for 192.0.2.9: b'b3b023c9e950672f64d119fecdae83fd'
  key.get_fingerprint())))
2019-04-20 10:02:13,966 base.py 218 INFO   Command sudo python set_config.py output ['Add event notifier in ceilometer', 'NOTE: add compute.instance.update to event_definitions.yaml', 'NOTE: add maintenance.scheduled to event_definitions.yaml', 'NOTE: add maintenance.host to event_definitions.yaml']
2019-04-20 10:02:15,409 base.py 218 INFO   Command sudo python restart_aodh.py output []
2019-04-20 10:02:15,534 base.py 218 INFO   Command sudo python set_compute_config.py output []
2019-04-20 10:02:18,475 apex.py 148 INFO   Set apply patches start......
/home/jenkins/opnfv_slave_root/workspace/doctor-verify-all-apex-sample-x86_64-master/.tox/py34/lib/python3.4/site-packages/paramiko/client.py:711: UserWarning: Unknown ssh-ed25519 host key for 192.0.2.6: b'5ffdaf1f169876d8c9ac845458601952'
  key.get_fingerprint())))
/home/jenkins/opnfv_slave_root/workspace/doctor-verify-all-apex-sample-x86_64-master/.tox/py34/lib/python3.4/site-packages/paramiko/client.py:711: UserWarning: Unknown ssh-ed25519 host key for 192.0.2.7: b'50858b8f4760d0c962bbdcb845047215'
  key.get_fingerprint())))
/home/jenkins/opnfv_slave_root/workspace/doctor-verify-all-apex-sample-x86_64-master/.tox/py34/lib/python3.4/site-packages/paramiko/client.py:711: UserWarning: Unknown ssh-ed25519 host key for 192.0.2.3: b'890a9b151267bafe34f53791df2c09f2'
  key.get_fingerprint())))
2019-04-20 10:02:18,839 base.py 218 INFO   Command sudo python set_compute_config.py output []
2019-04-20 10:02:18,918 base.py 218 INFO   Command sudo python set_compute_config.py output []
2019-04-20 10:02:18,975 base.py 218 INFO   Command sudo python set_compute_config.py output []
2019-04-20 10:02:23,702 base.py 63 INFO   Setup ssh tunnel in apex installer......
2019-04-20 10:02:23,703 base.py 76 INFO   tunnel for port 12346
2019-04-20 10:02:23,707 base.py 76 INFO   tunnel for port 12348
2019-04-20 10:02:23,712 base.py 76 INFO   tunnel for port 12345
2019-04-20 10:02:23,717 image.py 48 INFO   image create start......
2019-04-20 10:02:30,846 image.py 68 INFO   image create end......
2019-04-20 10:02:30,847 user.py 70 INFO   user create start......
2019-04-20 10:02:31,104 user.py 86 INFO   create project......
2019-04-20 10:02:31,465 user.py 95 INFO   test project <Project description=, domain_id=default, enabled=True, id=59f862d75efe443a8d012801dfd81e48, is_domain=False, links={'self': 'http://192.0.2.4:35357/v3/projects/59f862d75efe443a8d012801dfd81e48'}, name=doctor, parent_id=default, tags=[]>
2019-04-20 10:02:31,734 user.py 103 INFO   create user......
2019-04-20 10:02:32,350 user.py 113 INFO   test user <User domain_id=default, enabled=True, id=d9f17c35c828412aac24f7f416ce9177, links={'self': 'http://192.0.2.4:35357/v3/users/d9f17c35c828412aac24f7f416ce9177'}, name=doctor, options={}, password_expires_at=None>
2019-04-20 10:02:32,593 user.py 127 INFO   role _member_ already created......
2019-04-20 10:02:32,594 user.py 128 INFO   test role <Role domain_id=None, id=9fe2ff9ee4384b1894a90878d3e92bab, links={'self': 'http://192.0.2.4:35357/v3/roles/9fe2ff9ee4384b1894a90878d3e92bab'}, name=_member_>
2019-04-20 10:02:34,070 user.py 78 INFO   user create end......
2019-04-20 10:02:34,071 main.py 55 INFO   doctor fault management test starting.......
2019-04-20 10:02:35,019 fault_management.py 65 INFO   fault management setup......
2019-04-20 10:02:35,019 user.py 190 INFO   quota update start......
2019-04-20 10:02:35,019 user.py 206 INFO   default quota update start......
2019-04-20 10:02:36,004 user.py 217 INFO   user quota update start......
2019-04-20 10:02:36,598 user.py 230 INFO   quota update end......
2019-04-20 10:02:36,598 network.py 41 INFO   network create start.......
2019-04-20 10:02:39,152 network.py 47 INFO   network create end.......
2019-04-20 10:02:39,152 network.py 49 INFO   subnet create start.......
2019-04-20 10:02:40,160 network.py 58 INFO   subnet create end.......
2019-04-20 10:02:40,161 instance.py 52 INFO   instance create start......
2019-04-20 10:02:45,024 instance.py 74 INFO   instance create end......
2019-04-20 10:02:45,025 instance.py 93 INFO   wait for vm launch start......
2019-04-20 10:03:03,773 instance.py 111 INFO   wait for vm launch end......
2019-04-20 10:03:03,773 alarm.py 45 INFO   alarm create start......
2019-04-20 10:03:07,157 alarm.py 81 INFO   alarm create end......
2019-04-20 10:03:07,157 sample.py 85 INFO   sample inspector start......
2019-04-20 10:03:10,063 sample.py 26 INFO   sample consumer start......
 * Running on http://0.0.0.0:12345/ (Press CTRL+C to quit)
 * Running on http://0.0.0.0:12346/ (Press CTRL+C to quit)
2019-04-20 10:03:12,847 apex.py 86 INFO   Get host ip by hostname=overcloud-novacompute-2.opnfvlf.org from Apex installer......
2019-04-20 10:03:12,848 base.py 174 INFO   Run command=source stackrc; nova show overcloud-novacompute-2 | awk '/ ctlplane network /{print $5}' in apex installer......
2019-04-20 10:03:17,901 base.py 183 INFO   Output=['192.0.2.3'] command=source stackrc; nova show overcloud-novacompute-2 | awk '/ ctlplane network /{print $5}' in apex installer
2019-04-20 10:03:17,902 fault_management.py 118 INFO   Get host info(name:overcloud-novacompute-2.opnfvlf.org, ip:192.0.2.3) which vm(doctor_vm0) launched at
2019-04-20 10:03:17,903 sample.py 30 INFO   sample monitor start......
2019-04-20 10:03:17,904 sample.py 85 INFO   Starting Pinger host_name(overcloud-novacompute-2.opnfvlf.org), host_ip(192.0.2.3)
2019-04-20 10:05:17,906 fault_management.py 89 INFO   fault management start......
2019-04-20 10:05:17,907 base.py 113 INFO   Get SSH keys from apex installer......
2019-04-20 10:05:17,907 base.py 117 INFO   Already have SSH keys from apex installer......
2019-04-20 10:05:17,983 utils.py 91 INFO   Copy /home/jenkins/opnfv_slave_root/workspace/doctor-verify-all-apex-sample-x86_64-master/doctor_tests/disable_network.sh -> disable_network.sh
2019-04-20 10:05:18,224 utils.py 72 INFO   Executing: bash disable_network.sh > disable_network.log 2>&1 &
2019-04-20 10:05:18,276 utils.py 86 INFO   *** SUCCESSFULLY run command bash disable_network.sh > disable_network.log 2>&1 &
2019-04-20 10:05:18,277 fault_management.py 91 INFO   fault management end......
2019-04-20 10:05:19,414 sample.py 98 INFO   doctor monitor detected at 1555743919.4143598
2019-04-20 10:05:19,414 sample.py 41 INFO   sample monitor report error......
2019-04-20 10:05:19,420 sample.py 238 INFO   event posted in sample inspector at 1555743919.4203355
2019-04-20 10:05:19,420 sample.py 239 INFO   sample inspector = <doctor_tests.inspector.sample.SampleInspector object at 0x7faee28db3c8>
2019-04-20 10:05:19,421 sample.py 241 INFO   sample inspector received data = b'[{"details": {"monitor_event_id": "monitor_sample_event1", "hostname": "overcloud-novacompute-2.opnfvlf.org", "status": "down", "monitor": "monitor_sample"}, "time": "2019-04-20T10:05:19.414919", "type": "compute.host.down"}]'
2019-04-20 10:05:19,469 sample.py 196 INFO   doctor compute.instance.update vm(<Server: doctor_vm0>) error 1555743919.4689558
2019-04-20 10:05:19,522 sample.py 165 INFO   doctor mark host(overcloud-novacompute-2.opnfvlf.org) down at 1555743919.522112
2019-04-20 10:05:19,635 sample.py 58 INFO   doctor consumer notified at 1555743919.635077
2019-04-20 10:05:19,635 sample.py 61 INFO   sample consumer received data = {'previous': 'insufficient data', 'reason_data': {'event': {'message_id': '018eb44f-ad21-48d7-90f2-5cace78ad134', 'raw': {}, 'generated': '2019-04-20T07:05:19.422701', 'traits': [['resource_id', 1, '447718d7-891d-43e4-aff0-b0fe346e55af'], ['service', 1, 'sample'], ['state', 1, 'error'], ['project_id', 1, '59f862d75efe443a8d012801dfd81e48'], ['instance_id', 1, '447718d7-891d-43e4-aff0-b0fe346e55af'], ['tenant_id', 1, '59f862d75efe443a8d012801dfd81e48']], 'message_signature': 'c86277df0da4928e31a25921194a9f50daf869d7c0a600a4a56a7e01e4a2b7b9', 'event_type': 'compute.instance.update'}, 'type': 'event'}, 'current': 'alarm', 'severity': 'moderate', 'reason': 'Event <id=018eb44f-ad21-48d7-90f2-5cace78ad134,event_type=compute.instance.update> hits the query <query=[{"field": "traits.instance_id", "op": "eq", "type": "", "value": "447718d7-891d-43e4-aff0-b0fe346e55af"}, {"field": "traits.state", "op": "eq", "type": "", "value": "error"}]>.', 'alarm_id': '01cf40c4-ad5b-47fe-baa5-cd1510173cb9', 'alarm_name': 'doctor_alarm0'}
87.254.192.34 - - [20/Apr/2019 10:05:19] "POST /failure HTTP/1.1" 200 -
2019-04-20 10:05:19,643 sample.py 176 INFO   doctor mark vm(<Server: doctor_vm0>) error at 1555743919.6438055
87.254.192.34 - - [20/Apr/2019 10:05:19] "PUT /events HTTP/1.1" 200 -
2019-04-20 10:05:19,647 sample.py 101 INFO   ping timeout, quit monitoring...
2019-04-20 10:05:48,353 fault_management.py 185 INFO   doctor fault management notification_time=0.220717191696167
2019-04-20 10:05:48,354 fault_management.py 188 INFO   doctor fault management test successfully
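The notification_time reported above is simply the gap between the monitor detecting the failure and the consumer receiving the alarm; the two epoch timestamps logged earlier reproduce it directly:

```python
# Timestamps copied from the log lines above (seconds since epoch).
monitor_detected = 1555743919.4143598   # "doctor monitor detected at ..."
consumer_notified = 1555743919.635077   # "doctor consumer notified at ..."

# Matches the reported notification_time of ~0.2207 s, well under the
# 1-second requirement the doctor fault management test checks against.
notification_time = consumer_notified - monitor_detected
print(round(notification_time, 6))
```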
2019-04-20 10:05:48,354 fault_management.py 198 INFO   run doctor fault management profile.......
2019-04-20 10:05:48,354 base.py 113 INFO   Get SSH keys from apex installer......
2019-04-20 10:05:48,355 base.py 117 INFO   Already have SSH keys from apex installer......
2019-04-20 10:05:55,441 utils.py 91 INFO   Copy disable_network.log -> /home/jenkins/opnfv_slave_root/workspace/doctor-verify-all-apex-sample-x86_64-master/doctor_tests/disable_network.log
2019-04-20 10:05:55,692 fault_management.py 155 INFO   Get the disable_network.log from down_host(host_name:overcloud-novacompute-2.opnfvlf.org, host_ip:192.0.2.3)
2019-04-20 10:05:55,695 profiler_poc.py 97 INFO   
Total time cost: 104(ms)
==============================================================================>
       |Monitor|Inspector           |Controller|Notifier|Evaluator           |
       |-116   |108                 |?         |?       |?                   |
       |       |      |      |      |          |        |      |      |      |
link down:0    |      |      |      |          |        |      |      |      |
     raw failure:-116 |      |      |          |        |      |      |      |
         found affected:?    |      |          |        |      |      |      |
                  set VM error:113  |          |        |      |      |      |
                         marked host down:-8   |        |      |      |      |
                               notified VM error:?      |      |      |      |
                                        transformed event:?    |      |      |
                                                 evaluated event:?    |      |
                                                            fired alarm:?    |
                                                                received alarm:104  

2019-04-20 10:05:55,695 fault_management.py 94 INFO   fault management cleanup......
2019-04-20 10:05:55,695 fault_management.py 136 INFO   Already got the disable_network.log from down_host......
2019-04-20 10:05:57,797 sample.py 91 INFO   sample inspector stop......
2019-04-20 10:05:57,839 sample.py 253 INFO   shutdown inspector app server at 1555743957.8393376
87.254.192.34 - - [20/Apr/2019 10:05:57] "POST /events/shutdown HTTP/1.1" 200 -
2019-04-20 10:05:57,842 sample.py 35 INFO   sample monitor stop......
2019-04-20 10:05:57,842 sample.py 108 INFO   Stopping Pinger host_name(overcloud-novacompute-2.opnfvlf.org), host_ip(192.0.2.3)
2019-04-20 10:05:57,843 sample.py 31 INFO   sample consumer stop......
2019-04-20 10:05:57,848 sample.py 66 INFO   shutdown consumer app server at 1555743957.8479548
87.254.192.34 - - [20/Apr/2019 10:05:57] "POST /shutdown HTTP/1.1" 200 -
2019-04-20 10:05:57,850 alarm.py 84 INFO   alarm delete start.......
2019-04-20 10:05:59,723 alarm.py 93 INFO   alarm delete end.......
2019-04-20 10:05:59,723 instance.py 77 INFO   instance delete start.......
2019-04-20 10:06:22,121 instance.py 90 INFO   instance delete end.......
2019-04-20 10:06:22,121 network.py 61 INFO   subnet delete start.......
2019-04-20 10:06:25,081 network.py 64 INFO   subnet delete end.......
2019-04-20 10:06:25,081 network.py 66 INFO   network delete start.......
2019-04-20 10:06:26,185 network.py 69 INFO   network delete end.......
2019-04-20 10:06:27,065 main.py 104 INFO   doctor maintenance test starting.......
2019-04-20 10:06:29,213 maintenance.py 62 INFO   checking hypervisors.......
2019-04-20 10:06:29,213 maintenance.py 95 INFO   testing 3 computes with 32 vcpus each
2019-04-20 10:06:29,213 maintenance.py 98 INFO   testing 2 actstdby and 4 noredundancy instances
2019-04-20 10:06:29,214 user.py 190 INFO   quota update start......
2019-04-20 10:06:29,214 user.py 206 INFO   default quota update start......
2019-04-20 10:06:29,277 user.py 217 INFO   user quota update start......
2019-04-20 10:06:29,554 user.py 230 INFO   quota update end......
2019-04-20 10:06:30,474 maintenance.py 117 INFO   creating maintenance stack.......
2019-04-20 10:06:30,474 maintenance.py 118 INFO   parameters: {'nonha_intances': 4, 'ha_intances': 2, 'maint_image': 'cirros', 'ext_net': 'external', 'flavor_vcpus': 16}
2019-04-20 10:12:42,285 stack.py 95 INFO   retry creating maintenance stack.......
2019-04-20 10:13:11,548 stack.py 65 INFO   stack doctor_test_maintenance DELETE_COMPLETE
2019-04-20 10:19:20,179 main.py 121 ERROR  doctor maintenance test failed, Exception=stack CREATE not completed within 5min, status: CREATE_IN_PROGRESS
2019-04-20 10:19:20,181 main.py 122 ERROR  Traceback (most recent call last):
  File "/home/jenkins/opnfv_slave_root/workspace/doctor-verify-all-apex-sample-x86_64-master/doctor_tests/stack.py", line 92, in create
    self.wait_stack_create()
  File "/home/jenkins/opnfv_slave_root/workspace/doctor-verify-all-apex-sample-x86_64-master/doctor_tests/stack.py", line 76, in wait_stack_create
    self._wait_stack_action_complete('CREATE')
  File "/home/jenkins/opnfv_slave_root/workspace/doctor-verify-all-apex-sample-x86_64-master/doctor_tests/stack.py", line 63, in _wait_stack_action_complete
    " %s" % (action, status))
Exception: stack CREATE not completed within 5min, status: CREATE_IN_PROGRESS

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/jenkins/opnfv_slave_root/workspace/doctor-verify-all-apex-sample-x86_64-master/doctor_tests/main.py", line 107, in test_maintenance
    maintenance.setup_maintenance(self.user)
  File "/home/jenkins/opnfv_slave_root/workspace/doctor-verify-all-apex-sample-x86_64-master/doctor_tests/scenario/maintenance.py", line 123, in setup_maintenance
    files=files)
  File "/home/jenkins/opnfv_slave_root/workspace/doctor-verify-all-apex-sample-x86_64-master/doctor_tests/stack.py", line 103, in create
    self.wait_stack_create()
  File "/home/jenkins/opnfv_slave_root/workspace/doctor-verify-all-apex-sample-x86_64-master/doctor_tests/stack.py", line 76, in wait_stack_create
    self._wait_stack_action_complete('CREATE')
  File "/home/jenkins/opnfv_slave_root/workspace/doctor-verify-all-apex-sample-x86_64-master/doctor_tests/stack.py", line 63, in _wait_stack_action_complete
    " %s" % (action, status))
Exception: stack CREATE not completed within 5min, status: CREATE_IN_PROGRESS
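The failing `_wait_stack_action_complete` in the traceback above is a bounded polling loop: the stack stayed in CREATE_IN_PROGRESS past the 5-minute limit, so the wait raised. A simplified sketch of that pattern (names and structure are illustrative, not the actual doctor_tests code):

```python
import time

def wait_stack_action_complete(get_status, action="CREATE",
                               timeout=300, interval=5, sleep=time.sleep):
    """Poll get_status() until '<ACTION>_COMPLETE', mirroring the
    5-minute bound seen in the traceback above."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status == "%s_COMPLETE" % action:
            return status
        if status == "%s_FAILED" % action:
            raise Exception("stack %s failed, status: %s" % (action, status))
        sleep(interval)
    raise Exception("stack %s not completed within %dmin, status: %s"
                    % (action, timeout // 60, get_status()))
```

Here `get_status` stands in for a Heat client call fetching the stack's `stack_status`; a run like the one logged above ends in the timeout branch because the status never leaves CREATE_IN_PROGRESS.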

2019-04-20 10:19:20,182 sample.py 41 INFO   sample admin tool stop......
2019-04-20 10:19:20,182 sample.py 36 INFO   sample app manager stop......
2019-04-20 10:19:20,182 sample.py 91 INFO   sample inspector stop......
2019-04-20 10:19:20,183 maintenance.py 236 INFO   stack delete start.......
2019-04-20 10:19:42,478 stack.py 65 INFO   stack doctor_test_maintenance DELETE_COMPLETE
2019-04-20 10:19:42,479 apex.py 166 INFO   restore apply patches start......
2019-04-20 10:19:42,566 image.py 71 INFO   image delete start.......
2019-04-20 10:19:42,958 base.py 218 INFO   Command sudo python restore_compute_config.py output ['restoring nova.bak.']
2019-04-20 10:19:42,961 base.py 218 INFO   Command sudo python restore_config.py output ['restore', 'restore: /var/lib/config-data/puppet-generated/ceilometer/etc/ceilometer/event_definitions.yaml', 'Bak_file empty, so removing also: /var/lib/config-data/puppet-generated/ceilometer/etc/ceilometer/event_definitions.yaml']
2019-04-20 10:19:42,967 base.py 218 INFO   Command sudo python restore_compute_config.py output ['nova.bak does not exist.']
2019-04-20 10:19:44,222 image.py 76 INFO   image delete end.......
2019-04-20 10:19:44,222 user.py 163 INFO   user delete start......
2019-04-20 10:19:44,223 user.py 156 INFO   restore default quota......
2019-04-20 10:19:44,308 base.py 218 INFO   Command sudo python restore_aodh.py output []
2019-04-20 10:19:44,424 base.py 218 INFO   Command sudo python restore_compute_config.py output ['nova.bak does not exist.']
2019-04-20 10:19:46,056 user.py 187 INFO   user delete end......

Total time cost: 104(ms)
==============================================================================>
       |Monitor|Inspector           |Controller|Notifier|Evaluator           |
       |-116   |108                 |?         |?       |?                   |
       |       |      |      |      |          |        |      |      |      |
link down:0    |      |      |      |          |        |      |      |      |
     raw failure:-116 |      |      |          |        |      |      |      |
         found affected:?    |      |          |        |      |      |      |
                  set VM error:113  |          |        |      |      |      |
                         marked host down:-8   |        |      |      |      |
                               notified VM error:?      |      |      |      |
                                        transformed event:?    |      |      |
                                                 evaluated event:?    |      |
                                                            fired alarm:?    |
                                                                received alarm:104  

ERROR: InvocationError for command '/home/jenkins/opnfv_slave_root/workspace/doctor-verify-all-apex-sample-x86_64-master/.tox/py34/bin/doctor-test' (exited with code 1)
___________________________________ summary ____________________________________
ERROR:   py34: commands failed
Build step 'Execute shell' marked build as failure
$ ssh-agent -k
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 24393 killed;
[ssh-agent] Stopped.
Archiving artifacts
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any
Request made to compress build log
Sending email to: tbramwell@linuxfoundation.org agardner@linuxfoundation.org rgrigar@linuxfoundation.org
[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
[WS-CLEANUP] done
Finished: FAILURE