[01:48] <skay> any idea when the pure-python branch of lp:charms/python-django will land?
[01:49] <skay> I'd like to consider a change in how python-django pip installs things. I'd like it to be able to pip install from local wheels.
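For context, a minimal sketch of what that change might look like in the charm's install step (wheel_dir and the package name are placeholders, not the charm's actual code); pip's --no-index plus --find-links flags restrict resolution to a local wheelhouse:

    # Sketch only: install from local wheels instead of reaching out to PyPI.
    import subprocess
    import sys

    wheel_dir = '/var/lib/juju/wheels'  # hypothetical wheelhouse path

    subprocess.check_call([
        sys.executable, '-m', 'pip', 'install',
        '--no-index',               # never contact PyPI
        '--find-links', wheel_dir,  # resolve packages from local wheels only
        'Django',
    ])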
[15:04] <johnmce> Hi guys. Is anyone free to answer a quick question about the nova-cloud-controller charm and the new neutron-api charm?
[15:27] <mbruzek1> We have 30 minutes until the Juju Big Data UOS session.  Please attend if you are interested in BIG DATA
[15:27] <mbruzek1> http://summit.ubuntu.com/uos-1411/meeting/22392/big-data-and-juju/
[15:33] <gnuoy> johnmce, hi, what's the question?
[15:36] <johnmce> gnuoy: Hi. I'm upgrading my test cluster from Icehouse to Juno. Had a few breakages along the way. Now deploying the updated nova-cc charm and neutron-api. The nova-cc charm fails to establish the mysql relationship due to a failed neutron db upgrade.
[15:37] <johnmce> Was wondering if nova-cc should still be fiddling with the neutron db in the presence of neutron-api
[15:38] <gnuoy> johnmce, unfortunately, yes. the nova-cc charm is in charge of running db migrations for neutron for os >= Juno
[15:39] <gnuoy> johnmce, from a juju pov there is a relation between nova-cc and mysql but the neutron db migration was not run, is that right ?
[15:39] <johnmce> gnuoy: OK, I don't suppose you would know how to work around for this failure. Command-line being run on nova-cc node is "/usr/bin/neutron-db-manage --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini upgrade head"
[15:40] <johnmce> gnuoy: Error is "sqlalchemy.exc.OperationalError: (OperationalError) (1050, "Table 'agents' already exists") '\nCREATE TABLE agents ......"
[15:40] <johnmce> gnuoy: So, I seem to have at least a partial schema update
[15:41] <gnuoy> johnmce, is it definitely partial ?
[15:41] <gnuoy> I mean, are you sure the migration hasn't run through?
[15:41] <johnmce> gnuoy: Well, I seem to have a table that it thinks I shouldn't have.
[15:42] <johnmce> gnuoy: Presumably that table didn't exist pre-Juno, yet I seem to have it.
[15:42] <johnmce> gnuoy: If I don't need a schema update (migration), then the logic that determines when an upgrade should be performed must be broken.
[15:43] <gnuoy> johnmce, that is possible. Do you see any other evidence that the neutron db is not in the state it should be?
[15:43] <gnuoy> errors from neutron-server etc
[15:44] <johnmce> gnuoy: I'm not familiar with the db changes required for a Juno migration, so I wouldn't know what to look for. I've not checked the neutron server logs.
[15:45] <gnuoy> johnmce, I'm just wondering if everything is rosy but, as you say, the logic around when to run the migration is broken
[15:46] <johnmce> gnuoy: Things are pretty broken generally right now, so I'm just taking it one step at a time. Been fixing up other charms as I go.
[15:47] <johnmce> gnuoy: Keystone, glance, horizon are all good, but nova_cc can't get past this
[15:48] <gnuoy> johnmce, when you say it can't get past it do you mean a juju hook keeps erroring ?
[15:49] <johnmce> gnuoy: I've destroyed the nova-cc service and re-deployed numerous times, so maybe the migration happened before. The juju hook for shared-db always fails. "'hook failed: "shared-db-relation-changed" for percona-cluster:shared-db'"
[15:51] <gnuoy> johnmce, right, I see. Sounds like a charm bug that migrations fail if the schema is already present.
[15:52] <gnuoy> johnmce, I can work on trying to reproduce and getting a fix tomorrow.
[15:52] <gnuoy> johnmce, could you raise a bug report please ?
[15:54] <johnmce> gnuoy: I don't mind fixing it myself, if I can get a handle on what it's doing. Do you have any idea offhand what clues the charm looks for to decide a migration is needed, or does it just unconditionally call the neutron-db-manage script?
[15:54] <lazyPower> Greetings #juju - UOS 1411 - Big Data track is going to start in about 6 minutes.  If you'd like to participate - https://plus.google.com/hangouts/_/hoaevent/AP36tYfh0sDqCTgtXmsp4LRdu4lnwysNlJ0jMTS7tlh8HNWfgen-Tw?authuser=0&hl=en
[15:54] <gnuoy> johnmce, I think it blindly calls it on the establishment of a relation with percona
[15:54] <johnmce> gnuoy: Maybe the logic should be in neutron-db-manage?
[15:55] <gnuoy> johnmce, yes, I think that is true. I wonder if the nova db manage utility can be rerun without explosions
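A rough sketch of the kind of guard under discussion, assuming a hypothetical flag file; this is illustrative only, not the charm's actual code:

    # Hypothetical guard: run the neutron migration once rather than on
    # every shared-db relation event.
    import os
    import subprocess

    MIGRATION_FLAG = '/var/lib/juju/neutron-db-migrated'  # made-up marker

    def maybe_migrate_neutron_db():
        if os.path.exists(MIGRATION_FLAG):
            return  # already migrated; re-running recreates existing tables
        subprocess.check_call([
            'neutron-db-manage',
            '--config-file=/etc/neutron/neutron.conf',
            '--config-file=/etc/neutron/plugins/ml2/ml2_conf.ini',
            'upgrade', 'head',
        ])
        open(MIGRATION_FLAG, 'w').close()  # record that the migration ran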
[15:57] <johnmce> gnuoy: I'll try commenting that 'create table' bit out for now and see if I can get it to progress to completion. Looks like it's got some broken logic somewhere though.
[15:59] <gnuoy> johnmce, I'm not sure I'd want to comment that 'create table' out tbh, you might get some unexpected results from neutron-db-manage
[15:59] <gnuoy> I'd be tempted to mark the failed hook as resolved and see if things continue from there
[16:00] <johnmce> gnuoy: I've already tried repeatedly to say it's resolved, but it always re-runs that script. You'd think they'd use an "if not exists" type option on table creation.
[16:01] <gnuoy> yeah, that'd be good
[16:09] <johnmce> gnuoy: This goes well beyond a single table. It's trying to create every table from scratch, when they already exist.
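To illustrate johnmce's "if not exists" point with SQLAlchemy, the library the traceback above comes from (the connection URL and column are placeholders):

    from sqlalchemy import Column, MetaData, String, Table, create_engine

    engine = create_engine('mysql://user:pass@db/neutron')  # placeholder DSN
    metadata = MetaData()
    Table('agents', metadata, Column('id', String(36), primary_key=True))

    # checkfirst=True issues CREATE TABLE only for tables that are absent,
    # so re-running this is harmless.
    metadata.create_all(engine, checkfirst=True)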
[16:12] <jamespage> gnuoy, dosaboy: did you have any thoughts on tuning down the log level across the openstack charms?
[16:12] <jamespage> dosaboy, just thinking about it in the context of the ceph-broker work, but we probably need to discuss more generally what the right level is
[16:13] <jamespage> dosaboy, I personally think our default level should be DEBUG, with only end-user-useful info going at INFO level
[16:16] <dosaboy> jamespage: agreed
[16:16] <dosaboy> same as openstack
[16:16] <jamespage> dosaboy, oo - do they have that documented somewhere?
[16:16] <dosaboy> hmm lemme dig
[16:17] <jamespage> dosaboy, so an end-user message would be "Configuring ceph storage pools with name XX, replicas XX"
[16:17] <jamespage> and "Storage pools configured, starting cinder volume service"
[16:17] <jamespage> maybe
[16:17] <jamespage> not sure
[16:18] <gnuoy> johnmce, I'm wondering if the stamp didn't get run
[16:19] <dosaboy> jamespage: *some* info here - http://docs.openstack.org/openstack-ops/content/logging_monitoring.html
[16:19] <dosaboy> lemme find better
[16:19] <dosaboy> https://wiki.openstack.org/wiki/LoggingStandards
[16:20] <gnuoy> neutron-db-manage --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini stamp icehouse then upgrade head
[16:20] <dosaboy> jamespage: ^^
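A sketch of the convention jamespage proposes, using charmhelpers' hookenv logger (the messages here are invented examples):

    from charmhelpers.core.hookenv import DEBUG, INFO, log

    log('rendering ceph.conf from template', level=DEBUG)  # internal detail
    log('Configuring ceph storage pool cinder, replicas 3',
        level=INFO)  # information an end user actually cares about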
[16:21] <gnuoy> johnmce, ok, I think I see the bug
[16:22] <gnuoy> can you try running: neutron-db-manage --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini stamp icehouse and see if the migration then runs cleanly ?
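For context: neutron-db-manage wraps alembic, and 'stamp' only records a revision in the alembic_version table without touching the schema, so the subsequent 'upgrade head' starts from icehouse instead of trying to recreate every table. Roughly the equivalent alembic calls (config path is a placeholder):

    from alembic import command
    from alembic.config import Config

    cfg = Config('alembic.ini')     # placeholder config path
    command.stamp(cfg, 'icehouse')  # record the starting revision only
    command.upgrade(cfg, 'head')    # then apply just the newer migrations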
[16:23] <johnmce> gnuoy: Haven't read this properly yet, but it's related: http://www.gossamer-threads.com/lists/openstack/dev/42070
[16:25] <johnmce> gnuoy: Response to command: INFO  [alembic.migration] Context impl MySQLImpl. INFO  [alembic.migration] Will assume non-transactional DDL.
[16:26] <gnuoy> johnmce, yes, that thread looks to be the same issue
[16:28] <johnmce> gnuoy: Seems to be working now!
[16:28] <johnmce> gnuoy: thanks for your help
[16:28] <gnuoy> johnmce, np, I will fix that in the charms tomorrow
[16:30] <gnuoy> johnmce, sorry that you hit a bug. fwiw the bug is with line 522 of hooks/nova_cc_utils.py. that conditional shouldn't be there and was left over from when nova-cc and neutron-api were deciding between them on who should run the migration
[16:41] <johnmce> gnuoy: OK, I see the problem now. Thanks for the info. I'll modify my copy.
[16:46] <whit> working on aws, does anyone ever get a machine that reports itself as one public ip, but when you ssh in, ssh thinks it's a different public ip?
[16:52] <lazyPower> whit: i've seen this behavior before - what's weird was that querying the metadata url caused the IPs to correct themselves.
[16:53] <whit> lazyPower, tell me more about this metadata url?
[16:53] <lazyPower> whit: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html
[16:53] <whit> lazyPower, danke
[16:54] <lazyPower> whit: no idea what caused the voodoo tbh - it was an isolated incident from another user who joined #juju ~ a week ago.
[16:54] <whit> lazyPower, wonder if that forced the route to get set?
[16:55] <lazyPower> well, a curl to the metadata url shouldn't have any effect - that's all set during cloud-init
[16:55] <whit> I've seen it 3 times in the last week
[16:55] <whit> dunno
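One quick way to compare what EC2 itself reports, using the well-known metadata address documented at the link lazyPower posted (a sketch):

    import urllib.request

    # 169.254.169.254 is the EC2 instance metadata service.
    for path in ('public-ipv4', 'local-ipv4'):
        url = 'http://169.254.169.254/latest/meta-data/' + path
        with urllib.request.urlopen(url) as resp:
            print(path, resp.read().decode().strip())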
[16:59] <jose> whit: status will give the public IP, ssh will use the internal IP
[16:59] <jose> whit: or at least that's the behaviour I'm seeing on AWS
[16:59] <jose> probably using the bootstrap node as a proxy
[17:34] <lazyPower> jose: juju ssh does
[17:34] <lazyPower> all connectivity between the workstation and nodes proxies through the state server
[17:34] <jose> there it is
[19:00] <lazyPower> Juju Open feedback session is about to get started
[19:00] <lazyPower> https://plus.google.com/hangouts/_/hoaevent/AP36tYdwa3d9A4ohXuoRdM-SQCQSJUzu4xgEflvg996V8rMyAw427g?authuser=0&hl=en
[19:01] <lazyPower> if you'd like to join and add your view/comments/feedback we'd love to hear from you
[22:06] <whit> cmars, the one thing that comes to mind is that it might make sense to not run the collect-metrics hook if a charm does not define any metrics