[01:48] <skay> any idea when the pure-python branch of lp:charms/python-django will land?
[01:49] <skay> I'd like to consider a change in how python-django pip installs things. I'd like it to be able to pip install from local wheels.
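What skay describes is supported by stock pip flags; a minimal sketch, assuming wheels have been pre-built into a local directory (the paths and the Django version pin are illustrative):

```shell
# On a machine with network access, build wheels into a local cache:
pip wheel --wheel-dir=/var/cache/wheels Django==1.6.8

# In the charm, install strictly from that cache, never touching PyPI:
pip install --no-index --find-links=/var/cache/wheels Django==1.6.8
```

`--no-index` disables the package index entirely, so the install fails fast if a wheel is missing instead of silently reaching out to the network.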
[15:04] <johnmce> Hi guys. Is anyone free to answer a quick question about the nova-cloud-controller charm and the new neutron-api charm?
[15:27] <mbruzek1> We have 30 minutes until the Juju Big Data UOS session. Please attend if you are interested in BIG DATA
[15:33] <gnuoy> johnmce, hi, what's the question?
[15:36] <johnmce> gnuoy: Hi. I'm upgrading my test cluster from Icehouse to Juno. Had a few breakages along the way. Now deploying the updated nova-cc charm and neutron-api. The nova-cc charm fails to establish the mysql relation due to a failed neutron db upgrade.
[15:37] <johnmce> I was wondering if nova-cc should still be fiddling with the neutron db in the presence of neutron-api
[15:38] <gnuoy> johnmce, unfortunately, yes. The nova-cc charm is in charge of running db migrations for neutron for os >= Juno
[15:39] <gnuoy> johnmce, from a juju pov there is a relation between nova-cc and mysql, but the neutron db migration was not run, is that right?
[15:39] <johnmce> gnuoy: OK. I don't suppose you would know how to work around this failure? The command line being run on the nova-cc node is "/usr/bin/neutron-db-manage --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini upgrade head"
[15:40] <johnmce> gnuoy: The error is "sqlalchemy.exc.OperationalError: (OperationalError) (1050, "Table 'agents' already exists") '\nCREATE TABLE agents ......"
[15:40] <johnmce> gnuoy: So I seem to have at least a partial schema update
[15:41] <gnuoy> johnmce, is it definitely partial?
[15:41] <gnuoy> I mean, are you sure the migration hasn't run through?
[15:41] <johnmce> gnuoy: Well, I seem to have a table that it thinks I shouldn't have.
[15:42] <johnmce> gnuoy: Presumably that table didn't exist pre-Juno, yet I seem to have it.
[15:42] <johnmce> gnuoy: If I don't need a schema update (migration), then the logic that determines when an upgrade should be performed must be broken.
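One way to see how far Alembic believes the neutron schema has migrated is to read its bookkeeping table directly. The `alembic_version` table and `version_num` column are standard Alembic; the database name and credentials here are illustrative:

```shell
# Show the Alembic revision the neutron database is stamped at.
# An empty or Icehouse-era result, alongside Juno-era tables already
# existing, would point at a missing or mis-run "stamp" step.
mysql -u root -p neutron -e 'SELECT version_num FROM alembic_version;'
```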
[15:43] <gnuoy> johnmce, that is possible. Do you see any other evidence that the neutron db is not in the state it should be?
[15:43] <gnuoy> errors from neutron-server etc
[15:44] <johnmce> gnuoy: I'm not familiar with the db changes required for a Juno migration, so I wouldn't know what to look for. I've not checked the neutron server logs.
[15:45] <gnuoy> johnmce, I'm just wondering if everything is rosy but, as you say, the logic around when to run the migration is broken
[15:46] <johnmce> gnuoy: Things are pretty broken generally right now, so I'm just taking it one step at a time. I've been fixing up other charms as I go.
[15:47] <johnmce> gnuoy: Keystone, glance and horizon are all good, but nova-cc can't get past this
[15:48] <gnuoy> johnmce, when you say it can't get past it, do you mean a juju hook keeps erroring?
[15:49] <johnmce> gnuoy: I've destroyed the nova-cc service and re-deployed numerous times, so maybe the migration happened before. The juju hook for shared-db always fails: "'hook failed: "shared-db-relation-changed" for percona-cluster:shared-db'"
[15:51] <gnuoy> johnmce, right, I see. Sounds like a charm bug that migrations fail if the schema is already present.
[15:52] <gnuoy> johnmce, I can work on trying to reproduce and getting a fix tomorrow.
[15:52] <gnuoy> johnmce, could you raise a bug report please?
[15:54] <johnmce> gnuoy: I don't mind fixing it myself, if I can get a handle on what it's doing. Do you have any idea offhand what clues the charm looks for to decide a migration is needed, or does it just unconditionally call the neutron-db-manage script?
[15:54] <lazyPower> Greetings #juju - UOS 1114 - Big Data track is going to start in about 6 minutes. If you'd like to participate - https://plus.google.com/hangouts/_/hoaevent/AP36tYfh0sDqCTgtXmsp4LRdu4lnwysNlJ0jMTS7tlh8HNWfgen-Tw?authuser=0&hl=en
[15:54] <gnuoy> johnmce, I think it blindly calls it on the establishment of a relation with percona
[15:54] <johnmce> gnuoy: Maybe the logic should be in neutron-db-manage?
[15:55] <gnuoy> johnmce, yes, I think that is true. I wonder if the nova db manage utility can be rerun without explosions
[15:57] <johnmce> gnuoy: I'll try commenting that 'create table' bit out for now and see if I can get it to progress to completion. Looks like it's got some broken logic somewhere though.
[15:59] <gnuoy> johnmce, I'm not sure I'd want to comment that 'create table' out tbh, you might get some unexpected results from neutron-db-manage
[15:59] <gnuoy> I'd be tempted to mark the failed hook as resolved and see if things continue from there
[16:00] <johnmce> gnuoy: I've already tried repeatedly to say it's resolved, but it always re-runs that script. You'd think they'd use an "if not exists" type option on table creation.
[16:01] <gnuoy> yeah, that'd be good
[16:09] <johnmce> gnuoy: This goes well beyond a single table. It's trying to create every table from scratch, when they already exist.
[16:12] <jamespage> gnuoy, dosaboy: did you have any thoughts on tuning down the log level across the openstack charms?
[16:12] <jamespage> dosaboy, just thinking about it in the context of the ceph-broker work, but we probably need to discuss more generally what's the right level
[16:13] <jamespage> dosaboy, I personally think our default level should be DEBUG, with only end-user-useful info going out at INFO level
[16:16] <dosaboy> jamespage: agreed
[16:16] <dosaboy> same as openstack
[16:16] <jamespage> dosaboy, oo - do they have that documented somewhere?
[16:16] <dosaboy> hmm lemme dig
[16:17] <jamespage> dosaboy, so an end-user message would be "Configuring ceph storage pools with name XX, replicas XX"
[16:17] <jamespage> and "Storage pools configured, starting cinder volume service"
[16:17] <jamespage> not sure
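The convention jamespage proposes could be expressed inside a hook with the `juju-log` tool, which takes a `-l` level flag. The INFO lines below are taken from his examples; the DEBUG line is an illustrative internal detail:

```shell
# Internal detail, hidden by default under the proposed convention:
juju-log -l DEBUG "sending pool-create request to ceph broker"

# Operator-facing progress, surfaced at INFO:
juju-log -l INFO "Configuring ceph storage pools with name cinder, replicas 3"
juju-log -l INFO "Storage pools configured, starting cinder volume service"
```

`juju-log` is only available inside a hook execution context, which is why charms usually wrap it in a helper rather than calling it directly.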
[16:18] <gnuoy> johnmce, I'm wondering if the stamp didn't get run
[16:19] <dosaboy> jamespage: *some* info here - http://docs.openstack.org/openstack-ops/content/logging_monitoring.html
[16:19] <dosaboy> lemme find better
[16:20] <gnuoy> neutron-db-manage --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini stamp icehouse then upgrade head
[16:20] <dosaboy> jamespage: ^^
[16:21] <gnuoy> johnmce, ok, I think I see the bug
[16:22] <gnuoy> can you try running: neutron-db-manage --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini stamp icehouse and see if the migration then runs cleanly?
[16:23] <johnmce> gnuoy: Not read this properly yet, but it is related: http://www.gossamer-threads.com/lists/openstack/dev/42070
[16:25] <johnmce> gnuoy: Response to command: INFO  [alembic.migration] Context impl MySQLImpl. INFO  [alembic.migration] Will assume non-transactional DDL.
[16:26] <gnuoy> johnmce, yes, that thread looks to be the same issue
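The recovery sequence gnuoy suggests, spelled out: stamp the database at its actual (Icehouse) revision so Alembic stops trying to recreate tables that already exist, run the upgrade so only the Icehouse-to-Juno delta is applied, then retry the failed hook. The unit name in the last command is illustrative:

```shell
# Record that the existing schema corresponds to the icehouse revision:
neutron-db-manage --config-file=/etc/neutron/neutron.conf \
    --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini stamp icehouse

# Now "upgrade head" applies only the missing migrations:
neutron-db-manage --config-file=/etc/neutron/neutron.conf \
    --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini upgrade head

# Re-run the failed shared-db-relation-changed hook:
juju resolved --retry nova-cloud-controller/0
```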
[16:28] <johnmce> gnuoy: Seems to be working now!
[16:28] <johnmce> gnuoy: thanks for your help
[16:28] <gnuoy> johnmce, np, I will fix that in the charms tomorrow
[16:30] <gnuoy> johnmce, sorry that you hit a bug. fwiw the bug is with line 522 of hooks/nova_cc_utils.py. That conditional shouldn't be there and was left over from when nova-cc and neutron-api were deciding between them who should run the migration
[16:41] <johnmce> gnuoy: OK, I see the problem now. Thanks for the info. I'll modify my copy.
[16:46] <whit> working on aws, does anyone ever get a machine that reports itself as one public ip, but when you ssh in, ssh thinks it's a different public ip?
[16:52] <lazyPower> whit: i've seen this behavior before - what's weird was querying the metadata url caused the IPs to correct themselves.
[16:53] <whit> lazyPower, tell me more about this metadata url?
[16:53] <lazyPower> whit: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html
[16:53] <whit> lazyPower, danke
[16:54] <lazyPower> whit: no idea what caused the voodoo tbh - it was an isolated incident from another user that joined #juju ~ a week ago.
[16:54] <whit> lazyPower, wonder if that forced the route to get set?
[16:55] <lazyPower> well, a curl to the metadata url shouldn't have any effect - that's all set during cloud init
[16:55] <whit> I've seen it 3 times in the last week
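The metadata service lazyPower links to is queried from inside the instance at a fixed link-local address, so comparing its answers with what ssh reports is a quick sanity check:

```shell
# Ask the EC2 metadata service (only reachable from the instance itself)
# which addresses it believes the instance has:
curl -s http://169.254.169.254/latest/meta-data/public-ipv4
curl -s http://169.254.169.254/latest/meta-data/local-ipv4
```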
[16:59] <jose> whit: status will give the public IP, ssh will use the internal IP
[16:59] <jose> whit: or at least that's the behaviour I'm seeing on AWS
[16:59] <jose> probably using the bootstrap node as a proxy
[17:34] <lazyPower> jose: juju ssh does
[17:34] <lazyPower> all connectivity between the workstation and nodes proxies through the state server
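That difference is visible by comparing the two ways in: `juju ssh` hops through the state server, while a plain ssh goes straight to the instance. The unit name and address below are illustrative:

```shell
# Proxied through the Juju state server:
juju ssh myservice/0

# Direct to the public address shown in juju status:
ssh ubuntu@54.0.0.10
```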
[17:34] <jose> there it is
[19:00] <lazyPower> Juju Open feedback session is about to get started
[19:01] <lazyPower> if you'd like to join and add your views/comments/feedback we'd love to hear from you
[22:06] <whit> cmars, the one thing that comes to mind is that it might make sense to not run the collect-metrics hook if a charm does not define any metrics
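For context, a charm declares its metrics in a metrics.yaml file; under whit's suggestion, juju would skip scheduling the collect-metrics hook for charms shipping no such declaration. The metric name and description below are illustrative:

```yaml
metrics:
  active-users:
    type: gauge
    description: Number of currently active users
```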

Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!