[00:00] <RoyK> conrmahr: dd if=/dev/zero of=/dev/thatdevice bs=1M count=1
[00:02] <conrmahr> that removed the RAID array?
[00:03] <conrmahr> now i want to format with zfs
[00:03] <Kallis> can anyone help me authenticate to my samba server via ldap on a windows server please ?
[00:04] <RoyK> first mdadm --stop the raid
[00:04] <RoyK> then mdadm --zero-superblock those devices
[00:04] <RoyK> then zpool create yourpool .....
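The three steps above (stop the array, zero the md superblocks, create the pool) can be sketched as a dry run. /dev/md2 and /dev/sdc are device names assumed from later in this conversation, and run() only echoes each command, so nothing is destroyed:

```shell
#!/bin/sh
# Dry-run sketch of the md-to-zfs teardown outlined above.
# /dev/md2 and /dev/sdc are assumed names; adjust for your system.
MD=/dev/md2
DISK=/dev/sdc
run() { echo "+ $*"; }               # swap for: "$@"  to really execute

run umount "$MD"                     # unmount any filesystem on the array
run mdadm --stop "$MD"               # stop the array itself
run mdadm --zero-superblock "$DISK"  # wipe md metadata from each member
run zpool create yourpool "$DISK"    # then build the pool
```

With an LVM stack on top, the volume group has to be deactivated or removed (vgremove/pvremove, as RoyK says below) before mdadm --stop can get exclusive access.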
[00:04] <conrmahr> Did that, but it says Cannot get exclusive access to /dev/md2
[00:04] <RoyK> do you have a vg on that?
[00:05] <conrmahr> yes
[00:05] <RoyK> then vgremove
[00:05] <RoyK> pvremove
[00:05] <RoyK> etc
[00:05] <conrmahr> i don't know what it means
[00:05] <conrmahr> but i have it
[00:05] <conrmahr> in the fdisk -l
[00:05] <RoyK> pastebin "pvs;lvs;vgs"
[00:07] <conrmahr> do i have to install a pkg for pastebin?
[00:07] <RoyK> !pastebinit
[00:07] <conrmahr> oh wait i'm stupid
[00:09] <conrmahr> http://paste.ubuntu.com/16450099/
[00:10] <RoyK> you haven't stopped the raid, have you?
[00:10] <RoyK> cat /proc/mdstat
[00:11] <conrmahr> no
[00:11] <conrmahr> it says i don't have special access
[00:11] <RoyK> make sure you use mdadm --zero-superblock on those disks before you create a zpool
[00:11] <RoyK> well, vgremove, pvremove
[00:12] <RoyK> and perhaps umount the filesystem first
[00:12] <conrmahr> i don't know the cmd line
[00:13] <conrmahr> sudo vgremove /dev/md2
[00:13] <conrmahr> ?
[00:13] <RoyK> pastebin output of "mount" and "lvs"
[00:14] <conrmahr> http://paste.ubuntu.com/16450137/
[00:15] <conrmahr> sudo lvs = no volume groups found
[00:15] <RoyK> well, you have a zpool
[00:16] <RoyK> pastebin lsblk output
[00:16] <RoyK> and zpool status
[00:17] <conrmahr> just fyi, i have 1 SSD 14.04 UBNT disk, and 2 WD Red 4TB (just for data disks)
[00:17] <conrmahr> second one i'm trying to clean
[00:17] <RoyK> please do as I asked
[00:18] <conrmahr> http://paste.ubuntu.com/16450202/
[00:18] <conrmahr> and yes I am an idiot
[00:19] <RoyK> mdadm --stop /dev/md2
[00:19] <RoyK> mdadm --zero-superblock /dev/sdc5
[00:20] <RoyK> zpool attach -f data1 sdb1 sdc1
[00:20] <RoyK> or something like that
[00:20] <conrmahr> mdadm: Cannot get exclusive access to /dev/md2:Perhaps a running process, mounted filesystem or active volume group?
[00:20] <RoyK> you may want to dd a few zeros over that disk first
[00:20] <RoyK> what about vgs?
[00:20] <RoyK> pvs?
[00:20] <RoyK> lvs?
[00:21] <RoyK> vgs first
[00:21] <conrmahr> sudo vgs
[00:21] <conrmahr> ?
[00:21] <RoyK> yes
[00:21] <conrmahr> No groups found
[00:21] <RoyK> pvs?
[00:22] <conrmahr> nothing lists
[00:22] <conrmahr> lvs no groups found
[00:22] <RoyK> dd a bunch of zeros over sdc
[00:22] <RoyK> what's strange is that lsblk lists it as a member of lvm
[00:22] <conrmahr> well this drive
[00:23] <conrmahr> i took out of a Synology Diskstation
[00:23] <RoyK> well, just dd over it
[00:23] <conrmahr> so what now
[00:23] <conrmahr> sudo dd if=/dev/zero of=/dev/md2 bs=1M count=1
[00:23] <RoyK> no
[00:24] <RoyK> the disk, not md2
[00:24] <RoyK> md2 was the raid
[00:24] <conrmahr> ok
[00:24] <RoyK> just don't overwrite sda or something :P
[00:24] <bindi> what do you want to do
[00:25] <bindi> dmraid -r -E /dev/sdc if you want to remove raid flag
[00:25] <RoyK> doesn't matter
[00:25] <RoyK> dmraid?
[00:25] <RoyK> that's OT in here
[00:26] <conrmahr> sudo dd if=/dev/zero of=/dev/ sdc=1M count=1 ?
[00:26] <bindi> that's what i had to do a few times to get raid disk recognized :p
[00:26] <RoyK> conrmahr: to sdc, probably
[00:27] <RoyK> conrmahr: that wipes the first 1MB of the disk
[00:27] <conrmahr> ok i did
[00:27] <RoyK> conrmahr: I won't post the full commandline - that's not accepted
[00:27] <RoyK> conrmahr: ok - try lsblk again
[00:28] <bindi> not accepted? lol
[00:28] <conrmahr> sudo dd if=/dev/zero of=/dev/sdc bs=1M count=1
[00:28] <RoyK> bindi: just because someone may do that unintentionally
[00:28] <bindi> :|
[00:28] <conrmahr> look the same
[00:28] <RoyK> conrmahr: give it a reboot
[00:29] <conrmahr> shutdown -r
[00:29] <RoyK> or just "reboot" :P
[00:29] <conrmahr> oh i forgot simple reboot
[00:29] <bindi> init 6
[00:29] <bindi> :P
[00:29] <conrmahr> :)
[00:30] <conrmahr> ok looks like it's removed!
[00:30] <RoyK> the "reboot" command has been around for almost 10 years ;)
[00:30] <conrmahr> sdc      8:32   0   3.7T  0 disk
[00:30] <conrmahr> only
[00:30] <RoyK> ok - so do you want those mirrored?
[00:30] <conrmahr> yeah it's just a backup
[00:31] <RoyK> zpool attach data1 sdb sdc
[00:31] <RoyK> should work
[00:31] <bindi> ugh
[00:31] <RoyK> perhaps with an -f
[00:31] <bindi> creating a new pool?
[00:31] <conrmahr> i used zfs to format the first one
[00:31] <RoyK> just attach the other disk
[00:32] <RoyK> it'll become a mirror
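Attaching a second disk to a single-disk pool converts that top-level vdev into a mirror. A dry-run sketch with the pool and device names used in this conversation (run() only echoes; -f is the override RoyK mentions):

```shell
#!/bin/sh
# Dry-run sketch: attach sdc alongside sdb in pool data1 to form a mirror.
# Names come from the conversation; run() only echoes.
POOL=data1
EXISTING=sdb
NEW=sdc
run() { echo "+ $*"; }

run zpool attach -f "$POOL" "$EXISTING" "$NEW"  # -f overrides e.g. a stale EFI label
run zpool status "$POOL"                        # then watch the resilver
```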
[00:32] <bindi> zpool create -f -m /mnt/mypool mypool mirror ata-1234 ata-2345
[00:32] <bindi> ? :P
[00:32] <bindi> ls /dev/disk/by-id
[00:32] <bindi> you really should do it by-id
[00:32] <RoyK> doesn't matter
[00:32] <bindi> sure it does
[00:32] <RoyK> no. it. does. not.
[00:32] <bindi> if you use a script to mount them then it does :p
[00:32] <RoyK> zfs will revert to using by-id names
[00:33] <conrmahr> invalid vdev specification
[00:33] <conrmahr> use '-f' to override the following errors:
[00:33] <conrmahr> /dev/sdc contains a corrupt primary EFI label.
[00:33] <RoyK> conrmahr: as I said, you may need -f
[00:33] <conrmahr> i think i need to format it first right?
[00:33] <RoyK> no
[00:33] <RoyK> just use -f
[00:34] <conrmahr> ok
[00:34] <RoyK> that will create the EFI label
[00:34] <RoyK> there's nothing like formatting anymore
[00:34] <conrmahr> look like it did something
[00:34] <conrmahr> how do i name the drive?
[00:34] <RoyK> conrmahr: try zpool status
[00:34] <conrmahr> or mount it
[00:34] <RoyK> conrmahr: it's mounted under /data1
[00:35] <RoyK> conrmahr: perhaps name that "data"
[00:35] <RoyK> just export it and reimport it as "data"
[00:35] <conrmahr> http://paste.ubuntu.com/16450521/
[00:35] <RoyK> you'll have subvolumes for that
[00:35] <RoyK> goodie
[00:35] <RoyK> looks good
[00:36] <conrmahr> beautiful
[00:36] <conrmahr> so i don't need to name it /data2
[00:36] <RoyK> if you want to rename it, export the pool and import it with a new name
[00:36] <Sachiru> Is there really no way to get PHP5 working on Ubuntu 16.04?
[00:37] <Sachiru> There are a lot of things that I can't get to work on it right now, because they all want PHP5
[00:37] <conrmahr> if i new how to do that
[00:37] <conrmahr> knew
[00:37] <RoyK> conrmahr: zpool export data1
[00:37] <bindi> Sachiru: http://askubuntu.com/questions/756879/cant-install-php5-on-ubuntu-16-04
[00:37] <RoyK> conrmahr: zpool import data1 mynewname
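Renaming a pool is exactly the export-then-import RoyK describes. A dry-run sketch (run() only echoes; old and new names are from the conversation):

```shell
#!/bin/sh
# Dry-run sketch: rename pool data1 to data via export/import.
OLD=data1
NEW=data
run() { echo "+ $*"; }

run zpool export "$OLD"         # pool must not be in use
run zpool import "$OLD" "$NEW"  # typically remounts under /data (the default
                                # mountpoint follows the pool name)
```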
[00:37] <bindi> check out the ppa
[00:39] <conrmahr> RoyK: so this will name the first drive /data1 and the second /data2?
[00:41] <Sachiru> Sigh, thanks. So much is broken with PHP7 (omdistro, librenms, etc.)
[00:42] <Sachiru> But I guess that isn't ubuntu's fault, rather, they don't move to PHP7 fast enough.
[00:42] <RoyK> conrmahr: no - the two drives are mirrored
[00:42] <RoyK> conrmahr: meaning when one of then dies, the data persists
[00:42] <conrmahr> so i'll just keep it as data1
[00:42] <conrmahr> no need to change
[00:43] <Sachiru> Ooh, we have ZFS discussion here?
[00:43] <Sachiru> Nice!
[00:43] <RoyK> conrmahr: if you want to stripe them, which I won't recommend, just detach sdc and add it
[00:43] <RoyK> but don't do that, really
[00:43] <conrmahr> Sachiru: Not really I used it to setup my first drive
[00:44] <conrmahr> yeah I like to mirror, that's all I wanted anyway
[00:44] <conrmahr> is it clustering the data?
[00:44] <RoyK> drives die, the silicon wants to go back to the mountains
[00:44] <RoyK> it's all natural
[00:44] <Sachiru> conrmahr: Define "clustering"
[00:45] <Sachiru> Because it means many different things, some of which apply to zfs, some of which do not.
[00:45] <RoyK> conrmahr: it's not clustering, that's far larger
[00:45] <conrmahr> file gets written to Disk1, then soon after it is copied to Disk2
[00:46] <RoyK> conrmahr: no, they are written to both at the same time
[00:46] <conrmahr> i only know clustering from the work i've done on MariaDB between two server databases
[00:46] <RoyK> conrmahr: and when one drive files (not "if"), the data is available
[00:46] <conrmahr> ok even better
[00:47] <conrmahr> how would you know if one drive fails?
[00:47] <RoyK> s/files/fails/
[00:47] <RoyK> zpool status shows that
[00:47] <RoyK> or use some monitoring
[00:47] <RoyK> smartmontools is good at that
[00:47] <conrmahr> do i have to apt-get that?
[00:48] <RoyK> install smartmontools and it installs smartd which can send you emails when something goes wrong
[00:48] <RoyK> yes
[00:48] <conrmahr> awesome
[00:49] <Sachiru> ZFS can detect if a disk fails (as in whole disk fails), or a certain sector/cluster on the disk fails
[00:49] <Sachiru> If the cluster fails, ZFS can detect if it can repair it or not, and auto-repairs if it is repairable
[00:49] <Sachiru> If not, it reports an unrepairable failure via "zpool status <poolname>"
[00:49] <RoyK> Sachiru: sometimes smartmontools is good also, to detect pre-failures
[00:50] <Sachiru> You can also initiate a scrub ("sudo zpool scrub <poolname>"). This reads all data written to disk, checks against checksum, and repairs repairable errors.
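The scrub-and-check routine Sachiru describes can be sketched as follows; the pool name and the cron schedule are assumptions, and run() only echoes:

```shell
#!/bin/sh
# Dry-run sketch of routine pool health checks; run() only echoes.
POOL=data1
run() { echo "+ $*"; }

run zpool status -x "$POOL"  # -x reports only unhealthy pools
run zpool scrub "$POOL"      # read and verify everything, repairing what it can
# a monthly scrub via cron is a common setup (schedule is an assumption), e.g.
# /etc/cron.d/zfs-scrub:  0 3 1 * * root /sbin/zpool scrub data1
```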
[00:50] <conrmahr> this is great
[00:50] <RoyK> smart stuff doesn't always work as intended, but according to google disk stats, almost 50% of the failures gave smart errors before dying
[00:51] <RoyK> failures as in smart errors
[00:51] <Sachiru> Additionally, (if you have Ubuntu installed onto a ZFS dataset as root), you can snapshot and revert easily. Thus, you can do "apt-get dist-upgrade" and other potentially destructive features without fear, since you can just snapshot the dataset, and reboot to the snapshot if it fails.
[00:52] <Sachiru> Same thing with if you want to install anything. Also, snapshots take at most 5 seconds, even if the thing you're snapshotting is several hundreds of terabytes of data.
[00:52] <Sachiru> Reversion takes the same amount of time.
[00:52] <conrmahr> so the config file says don't use SMART if you are using smartd
[00:52]  * RoyK messed up a VM rather badly last night, but then, it was on ZFS, so he just restored from the 15min old snapshot - zfs autosnap is neat
[00:53] <RoyK> conrmahr: huh?
[00:53] <RoyK> conrmahr: it's the same thing
[00:54] <conrmahr> # List of devices you want to explicitly enable S.M.A.R.T. for
[00:54] <conrmahr> # Not needed (and not recommended) if the device is monitored by smartd
[00:54] <RoyK> conrmahr: btw, if you have drives supporting SCTERC, turn that on if it's not enabled already
[00:54] <RoyK> conrmahr: no need to list any
[00:54] <conrmahr> ok
[00:54] <conrmahr> I have WD Red 4TB NAS (2x)
[00:54] <conrmahr> do they support SCTERC?
[00:55] <RoyK> conrmahr: can you pastebin smartctl -x /dev/sdb?
[00:56] <Sachiru> RoyK: Why the conservative auto-snapshot?
[00:56] <RoyK> Sachiru: conservative?
[00:56] <conrmahr> http://paste.ubuntu.com/16450748/
[00:56] <Sachiru> I have mine snapshot every 5 minutes. Then again, the host that does this handles only 6 VMs, and most of them are nginx+php+mariadb stacks.
[00:57] <RoyK> Sachiru: see SCT Error Recovery Control in there - set to 7 seconds
[00:57] <RoyK> should work fine
[00:57] <RoyK> I prefer 1s or so, but that's up to you
[00:57] <Sachiru> RoyK: I mean zfs auto-snapshot
[00:58] <Sachiru> Not SCT
[00:58] <Sachiru> For VMs
[00:58] <RoyK> conrmahr: sorry, that was for you
[00:58] <Sachiru> Ah
[00:58] <RoyK> Sachiru: I believe 15 minutes is sufficient
[00:58] <conrmahr> so in the smartd.conf?
[00:59] <RoyK> not sure - but anyway - 7s is ok
[01:00] <RoyK> far better than without ERC
[01:00] <bindi> dont you want to disable that tler thingy?
[01:00] <RoyK> without ERC the disk can go into so-called deep recovery, meaning it'll spend a minute or two trying to recover a single sector
[01:01] <RoyK> bindi: ERC == TLER - and that's not a thing you want to disable
[01:02] <RoyK> without ERC, your raid, be it md or zfs, may kick out a drive for a single bad sector
[01:03] <conrmahr> RoyK: I think by default it's set to 7s
[01:03] <conrmahr> i did $smartctl -l scterc /dev/sdc
[01:03] <conrmahr>            Read:     70 (7.0 seconds)
[01:03] <conrmahr>           Write:     70 (7.0 seconds)
[01:03] <RoyK> 7 is default on sata disks
[01:04] <conrmahr> so I should change to 1s?
[01:04] <conrmahr> whats the advantages and disadvantages? This is a NAS/Media Server
[01:05] <RoyK> advantages are to avoid a 7s drop if a sector goes bad
[01:05] <RoyK> zfs can handle that
[01:06] <RoyK> disadvantages are (something Donald Trump said)
[01:07] <Sachiru> IMHO, it's good to set ERC even if you're not using ZFS
[01:07] <RoyK> conrmahr: smartctl -l scterc,10,10 /dev/something
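For reference, the smartctl invocations for SCT ERC look like this; /dev/sdc is assumed, the two numbers are tenths of a second (70,70 = 7.0 s read/write, as the log's own output shows), and run() only echoes:

```shell
#!/bin/sh
# Dry-run sketch of querying/setting SCT ERC; run() only echoes.
# Values are tenths of a second: 70,70 = 7.0 s read/write.
DISK=/dev/sdc
run() { echo "+ $*"; }

run smartctl -l scterc "$DISK"        # query current ERC timeouts
run smartctl -l scterc,70,70 "$DISK"  # set both to 7.0 s
# note: on many drives the setting is volatile and must be reapplied at boot
```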
[01:08] <Sachiru> I know mdadm (at the very least) also complains about dropped sectors.
[01:08] <RoyK> Sachiru: it's not about sectors
[01:08] <RoyK> Sachiru: it's about the disk trying to find out about those sectors and becoming unavailable for a long time
[01:08] <Sachiru> RoyK: I know, it's sectors causing drives to drop from the array
[01:09] <RoyK> and you don't really want a few dead sectors to make your md or zfs or whatever to kick it out
[01:10] <RoyK> that's why we have raid
[01:10] <conrmahr> Write SCT (Set) Error Recovery Control Command failed: scsi error badly formed scsi parameters
[01:10] <conrmahr> SCT (Set) Error Recovery Control command failed
[01:10] <conrmahr> Retry with: 'scterc,70,70' to enable ERC or 'scterc,0,0' to disable
[01:10] <conrmahr> Write SCT (Set) Error Recovery Control Command failed: scsi error badly formed scsi parameters
[01:10] <conrmahr> SCT (Set) Error Recovery Control command failed
[01:10] <conrmahr> Retry with: 'scterc,70,70' to enable ERC or 'scterc,0,0' to disable
[01:11] <RoyK> heh - crippled fucking disks
[01:11] <RoyK> I've stopped buying WD stuff
[01:11] <RoyK> but then, 7s should do
[01:11] <conrmahr> what the?
[01:11] <RoyK> WD cripples the firmware
[01:12] <RoyK> toshiba has good, cheap drives with good firmware, at least for now
[01:13] <RoyK> the 'enterprise' SATA disks from WD are just the same as the desktop drives, just with better firmware
[01:13] <RoyK> a few years ago, they were the same, more or less
[01:14] <conrmahr> how do i start smartd
[01:15] <RoyK> which ubuntu version_
[01:15] <RoyK> ?
[01:15] <conrmahr> trusty
[01:15] <RoyK> should be running
[01:15] <RoyK> service smartd start
[01:15] <conrmahr> you didn't change the config file?
[01:16] <conrmahr> how do i define my email for notifications?
[01:16] <RoyK> default config should work
[01:16] <RoyK> it sends email to root
[01:16] <RoyK> just forward root emails to yourself
[01:16] <conrmahr> ah right
[01:16] <RoyK> in /etc/aliases
[01:16] <RoyK> (and then run newaliases)
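Forwarding root's mail boils down to one line in /etc/aliases plus newaliases. A safe sketch that works on a temp copy (the address is a placeholder; edit the real /etc/aliases yourself):

```shell
#!/bin/sh
# Sketch of forwarding root's mail; the address is a placeholder.
# Works on a temp copy so it is safe to run; edit the real /etc/aliases yourself.
ALIASES=$(mktemp)
echo "root: you@example.com" >> "$ALIASES"
grep '^root:' "$ALIASES"            # shows the alias line just added
rm -f "$ALIASES"
echo "+ newaliases   # against the real /etc/aliases: rebuild the alias db"
```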
[01:18] <conrmahr> no such dir
[01:20] <RoyK> is there any mail in the queue?
[01:20] <RoyK> mailq should tell
[01:21] <RoyK> do you have postfix installed?
[01:22] <conrmahr> i don't have mailq
[01:22] <conrmahr> it tells me what package its in
[01:23] <RoyK> apt-get install postfix
[01:24] <conrmahr> in the gui how can i select ok?
[01:24] <RoyK> don't use a guo
[01:24] <RoyK> don't use a gui
[01:24] <conrmahr> i mean i use terminal
[01:25] <conrmahr> but it looks like a gui
[01:25] <conrmahr> Package configuration
[01:27] <conrmahr> nvm
[01:27] <conrmahr> it was TAB + Enter
[01:33] <conrmahr> thanks everyone
[01:33] <conrmahr> especially RoyK
[05:51] <House> does anyone have cifs automounting working on 16.04?
[05:53] <House> i can successfully `smbclient -gL //server.fqdn/` with a password at commandline, but as soon as I use "-k" like auto.smb uses it throws errors
[08:36] <jamespage> coreycb, ok poked neutron-vpnaas, builds OK now
[08:36] <jamespage> coreycb, nova - needs microversion-parse, heat - needs monascaclient
[08:36] <jamespage> new deps, not in archive...
[09:11] <jamespage> coreycb, switching to merge-mode=replace makes sense to me - updates made...
[09:11] <jamespage> coreycb, poking at liberty failures now
[09:12] <jamespage> coreycb, neutron/wily failure test failure looks genuine; glance liberty failures consistent across trusty and wily.
[09:13] <jamespage> coreycb, I've also shoved the dh-python update into the SRU queue for Xenial - that will unblock most xenial things and the trusty/mitaka failures...
[09:26] <jamespage> coreycb, reverting merge-mode for now - not supported on trusty
[10:58] <jamespage> coreycb, ddellav: do you think major version matching might be a good idea for charm-helpers?
[10:58] <jamespage> I really hate having to update for x.1's
[10:59] <jamespage> hmm although that won't work for 20XX.X versions...
[10:59] <jamespage> grak
[11:20] <jamespage> coreycb, wedged the dh-python SRU into the openstack-ubuntu-testing PPA's to unblock branch builders...
[11:36] <jamespage> coreycb, will need to sru a glance-store point release for liberty
[11:45] <coreycb> jamespage, ddellav, yes major version matching would be nice for charm-helpers
[11:51] <coreycb> jamespage, thanks for all the updates, I updated the spreadsheet to track some of these and making a card for glance-store.
[12:01] <bc2946088> Morning!  Does anyone have a page showing the steps to adding Ceph OSD's to an already deployed cluster with JUJU?  Is it as easy as adding the drives to the server and rescanning using ceph-osd charm?
[12:05] <xnox> smoser, hey i have questions about config-drive metadata, networking, and static networking
[12:06] <xnox> i have provided a valid /etc/network/interfaces.d/enc1000.cfg with static network configuration...
[12:06] <xnox> and cloud-init ended up writing dhcp auto for the enc1000 interface in the /etc/network/interfaces.d/50-cloud-init.cfg
[12:07] <xnox> providing my own 50-cloud-init.cfg in the config-drive did not do a thing - cloud-init would still overwrite it with dhcp auto
[12:07] <xnox> i guess the only thing that worked was to provide /etc/network/interfaces full stop. but that is sad =(
[12:08] <xnox> for networking json.... i did not find enough keys in it to configure static network configuration as needed either.
[12:08] <xnox> so questions
[12:09] <xnox> smoser, is it possible to use interfaces.d in xenial and trump 50-cloud-init.cfg? should i be recommending to the cloud provider to ship a straight up /etc/network/interfaces ?
[12:32] <jamespage> coreycb, pointer to spreadsheet? I can update myself...
[12:33] <coreycb> jamespage, sent it to you
[12:33] <coreycb> jamespage, thanks
[12:44] <jamespage> coreycb, ta
[12:54] <jamespage> coreycb, oh - sorry - I see you were already looking at nova
[12:54] <jamespage> coreycb, I pushed my changes to the git repository...
[12:55] <jamespage> my bad
[12:55] <jamespage> now I can see the list that's great!
[12:55] <jamespage> coreycb, the babel update may be ignorable for now
[12:55] <jamespage> coreycb, heat wanted it as well, but built and worked ok without it...
[12:58] <coreycb> jamespage, no problem :)  ack on babel
[13:01] <dunaeth> Hi, any idea for partitioning a single machine for openstack testing ? There's a doc for automated cloud install but it does not recommand anything
[13:40] <smoser> xnox, there are some issues with config drive providing networking configuration right now.
[13:40] <smoser> if you do not want cloud-init to set fallback networking (write 50-cloud-init.cfg) then you have to disable cloud-init networking entirely.
[13:40] <smoser> i do not expect that to change, but want to fix the config drive networking scenario in short order.
[13:59] <jamespage> dunaeth, openstack deployed in LXD containers on a single machine?
[14:04] <jamespage> coreycb, where I need to rev a dependency to fix a daily builds issue, I've been backporting a version to the ppa under ~openstack-ubuntu-testing as well
[14:04] <jamespage> coreycb, glance-store being an example of that
[14:07] <coreycb> jamespage, I guess that would mostly only be the case for stable releases, since it'll take longer to get deps uploaded to the archive for them
[14:07] <jamespage> coreycb, yes
[14:07] <jamespage> I think that's OK
[14:07] <jamespage> coreycb, for dev, I've been uploading and then backporting straight away using backport_package - that places into <series>-staging and the trunk testing ppa's
[14:07] <coreycb> jamespage, it makes sense, for example I think glance-store dep probably should wait for the point release of glance before SRUing it
[14:08] <jamespage> coreycb, agreed
[14:08] <coreycb> jamespage, ok makes sense
[14:09] <jamespage> coreycb, I think the mitaka build failures should be OK now
[14:09] <jamespage> just waiting for the queue to clear :-)
[14:10] <coreycb> jamespage, awesome!
[14:26] <nabukadnezar43> hello ubuntu-server fails during installation with an error "modprobe -v usb-storage failed"
[14:26] <nabukadnezar43> are there any workarounds?
[14:27] <nabukadnezar43> i'm trying to install from a usb stick and i don't have a cd/dvd rom drive
[14:30] <TJ-> nabukadnezar43: check the 'dmesg' output for clues as to why/if the module fails to load
[14:31] <nabukadnezar43> TJ-: ok let me try
[14:38] <basilAB> I am looking to test the latest Mitaka release, available in the latest openstack neutron tag ( https://github.com/openstack/neutron/releases/tag/8.1.0 ), on Ubuntu. But the updates are not yet available in the Ubuntu Cloud Archive as packages. Does anyone know how often OpenStack updates are added to the Cloud Archive?
[14:43] <jamespage> hey basilAB - I know :-)
[14:44] <jamespage> basilAB, apologies for not responding to your email - I was just thinking about doing that
[14:44] <basilAB> ah! you are here.
[14:44] <basilAB> great :-) thank you!
[14:44] <jamespage> basilAB, typically we sweep up any available stable release in the first two weeks of the month, with the aim of getting them out into -updates by the end of the month
[14:45] <jamespage> basilAB, you'll be interested in the tracking bug - https://bugs.launchpad.net/cloud-archive/+bug/1580674
[14:45] <jamespage> basilAB, work should progress this week...
[14:46] <basilAB> subscribed now and thanks for the schedule details. I will keep an eye.
[14:46] <jamespage> although we are a little blocked by a related dh-python issue - trying to get that clear first...
[14:47] <nabukadnezar43> apperantly "usb_storage" module needs to be signed
[14:47] <EmilienM> jamespage: do you have LP for newton too?
[14:48] <basilAB> jamespage: since you are here, heard or any plans on adding 'octavia' lbaas addition to cloud-archive?
[14:48] <jamespage> EmilienM, we don't generally bug track development releases...
[14:48] <jamespage> basilAB, not in the short term no
[14:48] <basilAB> okay
[14:48] <nabukadnezar43> how do i sign the usb_storage module for secure boot?
[14:49] <jamespage> EmilienM, if you want to sniff current master branches for newton for Xenial and Yakkety - https://launchpad.net/~openstack-ubuntu-testing/+archive/ubuntu/newton
[14:49] <jamespage> that is the state of currently built master branches - its not complete - working some new dependencies...
[14:50] <EmilienM> jamespage: nice! please ping me when you feel like I can start testing it (asap)
[14:51] <jamespage> EmilienM, I'd not want you to put that into a voting gate btw...
[14:51] <EmilienM> jamespage: ok
[14:57] <jamespage> EmilienM, the gate for that PPA is 'it builds and passes its unit tests' ...
[14:57] <EmilienM> ok
[14:58] <EmilienM> jamespage: as soon as all packages are there, please ping me, I'll start testing it and report you feedback.
[14:58] <jamespage> EmilienM, that would be nice - thank you!
[14:58] <EmilienM> cool
[14:59] <jamespage> coreycb, hmm - niggle with adding a newer dh-python to the sbuild environment
[14:59] <jamespage> working on that now...
[14:59] <jamespage> it gets installed by s-p-c so we can add the PPA, which contains the newer version...
[14:59] <jamespage> grrr
[15:00] <coreycb> jamespage, s-p-c?
[15:01] <nabukadnezar43> TJ-: dmesg output didn't show anything relevant but i tried probing usb-storage module manually. Got a "could not insert usb_storage required key not available"
[15:01] <nabukadnezar43> error
[15:02] <TJ-> nabukadnezar43: that sounds rather like a secure-boot issue; well checking the module signing key anyhow
[15:02] <jamespage> software-properties-common
[15:03] <TJ-> nabukadnezar43: the modules shipped with the distro should all be signed
[15:04] <nabukadnezar43> TJ-: yeah that's weird
[15:04] <TJ-> nabukadnezar43: which ubuntu release are you working with? 16.04 ?
[15:04] <nabukadnezar43> 16.04 server amd64
[15:04] <TJ-> nabukadnezar43: using the -generic kernel, or -lowlatency?
[15:05] <coreycb> jamespage, are you familiar with the get_component_config() error that several of the newton packages are hitting?
[15:05] <TJ-> nabukadnezar43: it defaults to -generic but its always worth checking, I've noticed some differences with the -lowlatency as regards signing, though I forget what I did notice right now :)
[15:05] <nabukadnezar43> TJ-: i have no idea, i haven't changed anything
[15:05] <coreycb> jamespage, I'm looking at keystone and cinder for newton btw
[15:12] <TJ-> nabukadnezar43: I can't see any obvious bug reports with a similar problem. But, as it's the installer, I'm wondering if the installer has kernel version A, and during the chroot installation of the latest kernel version B it tries to modprobe the version B usb_storage module, which would upset the version A kernel I think (because the kernel uses different signing keys per build if I recall correctly)
[16:16] <brelod> hey guys
[16:17] <brelod> do you know some good book / other source to learn linux server administration?
[16:31] <genii> !guide
[16:56] <brelod> thx
[17:37] <xnox> smoser, how does one completely disable cloud-init networking? i'm pondering if I should try that out.
[17:37] <xnox> at least for this special usecase.
[17:37] <smoser> well, i'm pretty sure we do want to support your  use case. at least i think
[17:37] <smoser> but for disabling:
[17:38] <smoser> echo "network: {config: disabled}" | tee /etc/cloud/cloud.cfg.d/99-xnox-hates-networking.conf
[17:39] <smoser> echo "network: {config: disabled}" | tee /etc/cloud/cloud.cfg.d/99-xnox-hates-networking.cfg
[17:39] <smoser> (.cfg, not .conf)
[17:42] <xnox> smoser, right. or i should have a proper networking_json in my config_drive for static network configuration....
[17:56] <John[Lisbeth]> I need to connect my ubuntu server to the internet via command line and/or sideloading software via usb
[18:08] <coreycb> jamespage, I'm going to exclude install  of keystone in-tree tempest tests (dh_install --fail-missing --exclude keystone_tempest_plugin).  let me know if you disagree.  I think with tempest not packaged and not in main this makes sense.
[18:23] <John[Lisbeth]> ok I got internet working but now I need to block the automatic updating so I can free up apt to use for myself
[18:25] <John[Lisbeth]> nevermind everything is solved now thank you
[19:24] <devster31> does anyone know if and how I can regenerate cups certificates to access the web-ui?
[20:16] <EmilienM> coreycb: hey, did you update something in sahara / mitaka?
[20:17] <coreycb> EmilienM, there have been no updates since the final release
[20:17] <EmilienM> coreycb: ack thx
[21:49] <newbsie> How can I delete all existing logs on Ubuntu 16.04? "journalctl --vacuum-time=1seconds" did not work....
[22:15] <synchronet_> what logs exactly?
[22:16] <newbsie> synchronet_: it's a webserver with nginx/gunicorn/postgres so I want to purge all logs
[22:16] <newbsie> synchronet_: it's not a production/running server, just more an image I'm working on right now.
[22:20] <synchronet_> do it manually I guess
[22:21] <synchronet_> cd /var/log and go wild with the del command :)
[22:21] <synchronet_> there are more sever commands :)
[22:21] <synchronet_> servere
[22:21] <synchronet_> you have root yeah?
[22:23] <newbsie> synchronet_: yes, I'm root
[22:24] <newbsie> synchronet_: I can't use the journalctl command to do this?
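A likely reason the vacuum "did not work": journalctl --vacuum-time only removes archived journal files, so the active journal must be rotated first; nginx and postgres also write plain files under /var/log that journald never touches. A sketch (run() only echoes; the find pattern is an assumption about which files you want emptied):

```shell
#!/bin/sh
# Dry-run sketch of clearing logs on an image (run the real thing as root).
run() { echo "+ $*"; }

run journalctl --rotate            # archive the active journal files...
run journalctl --vacuum-time=1s    # ...so vacuum is allowed to delete them
# nginx/postgres write plain files under /var/log that journald never touches;
# truncating keeps file handles valid for running daemons:
run find /var/log -type f -name '*.log' -exec truncate -s 0 '{}' +
```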
[22:25] <synchronet_> why the logs bothering you?
[22:26] <newbsie> synchronet_: I wanted to start it fresh after doing a bunch of test while setting up.
[22:26] <newbsie> I understand it gets deleted or rotated out, but still... no need to clutter it with bunch of python errors/etc
[22:27] <synchronet_> not familiar with nginx but with apache you can just delete the VS etc
[22:27] <synchronet_> all logs go
[22:28] <synchronet_> not sure exactly what your doing tho
[22:29] <synchronet_> not sure the fascination with nginx, it's shareware
[22:30] <synchronet_> Virtualmin and Apache always worked for me :)
[22:30] <synchronet_> for website things
[22:31] <synchronet_> and with php7 now its fast enough
[22:32] <newbsie> synchronet_: I use nginx, mostly because it is used often in python/gunicorn projects and docs are easy for me to work with.
[22:32] <synchronet_> with a nice server
[22:32] <synchronet_> ok
[22:32] <synchronet_> area not familiar with
[22:32] <newbsie> synchronet_: typically, easy wins for me... I don't need advanced super functionality. :)
[22:33] <synchronet_> :) me niether
[22:33] <synchronet_> interested in a quiet life these days, not so easy these days
[22:33] <synchronet_> things move so fast
[22:34] <synchronet_> upgraded to php7 and WP plugins hate it
[22:34] <synchronet_> something about php7 and symbols
[22:34] <synchronet_> easy fix tho
[22:35] <newbsie> Yeah, I really don't enjoy server management/IT so it is just a necessity to do what I do, but I prefer development.
[22:35] <synchronet_> I prefer to win the lotto and have done with it ;)
[22:35] <newbsie> I prefer to win the lottery too, that way I can just do what I enjoy.
[22:36] <newbsie> That said, I have better chance of making money in the stock market than winning the lottery....
[22:36] <synchronet_> employ some clever dude then :)
[22:36] <jak2000> hi all how to use QUOTA on home dirs? 40gb for user1 and 60gb for user2? how to do?
[22:37] <synchronet_> in Virtualmin its a piece of cake
[22:42] <patdk-lap> well, go get a baker then
[22:46] <jak2000> synchronet_ any advice for me?
[22:46] <synchronet_> not from a command point of view, no
[22:46] <synchronet_> hang around
[22:47] <jak2000> Virtualmin  is for me?
[22:47] <synchronet_> maybe
[22:47] <jak2000> not
[22:47] <jak2000>  Open Source Web Hosting and Cloud Control Panels
[22:47] <jak2000> :(
[22:48] <jak2000> how to configure the quotas for homedirs?
[22:48] <sarnold> jak2000: http://manpages.ubuntu.com/manpages/xenial/man8/setquota.8.html
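The setquota man page sarnold links covers this; a dry-run sketch for jak2000's 40 GB / 60 GB case, assuming the users' homes live on the /home filesystem and quotas are already enabled there (setquota block limits are in 1 KiB units; run() only echoes):

```shell
#!/bin/sh
# Dry-run sketch of per-user quotas; usernames and /home are assumptions.
# setquota block limits are in 1 KiB units: 40 GB ~= 40*1024*1024 blocks.
limit1=$((40 * 1024 * 1024))   # 41943040 KiB
limit2=$((60 * 1024 * 1024))   # 62914560 KiB
run() { echo "+ $*"; }

run setquota -u user1 "$limit1" "$limit1" 0 0 /home  # 0 0 = no inode limits
run setquota -u user2 "$limit2" "$limit2" 0 0 /home
run repquota -s /home                                # verify
```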
[22:48] <synchronet_> can set quotas etc very easily but go to the forum first and ask if its what you need
[22:49] <synchronet_> sarnold: great
[22:49] <synchronet_> seems to be people help only after others try :)
[22:49] <synchronet_> irc sucks at times its best to know everything
[22:49] <sarnold> nah, it just took longer than usual for me to find the manpage in question :) somehow I suspect jak2000 didn't need the kernel interface, which was the first thing that came to mind :)
[22:49] <synchronet_> :)
[22:50] <synchronet_> too much to learn and too short a life
[22:52] <jak2000> thanks
[22:52] <synchronet_> :) fixed then?