[04:51] <cpaelzer> good morning
[05:04] <admcleod> i notice the installer doesnt prompt for tz/locale, what is the rationale here?
[05:04] <admcleod> cpaelzer: hi! :)
[05:28] <cpaelzer> admcleod: which installer
[05:28] <cpaelzer> subiquity?
[06:09] <lordievader> Good morning
[06:10] <cpaelzer> hi lordievader
[06:10] <cpaelzer> how are you today
[06:11] <lordievader> Hey cpaelzer
[06:11] <lordievader> Doing good here, how are you?
[06:16] <cpaelzer> lordievader: fine as well
[06:16] <cpaelzer> a bit rainy, but no floods here at least
[06:31] <Assid> hi there
[06:31] <Assid> so im considering a setup with a raid 1  using 1 samsung 850 pro and 1 wd green (mlc+tlc)
[06:32] <Assid> i was just asking if theres an issue with using different drives  altogether
[06:39] <Assid> hello?
[06:39] <lotuspsychje> !patience | Assid
[06:39] <lotuspsychje> Assid: the ubuntu channels run worldwide on timezones, not everyone is awake at same time
[06:40] <Assid> yeah im aware mate..
[06:40] <Assid> lotuspsychje: any idea if this looks like a drive thats failing? https://pastebin.com/CDL05Ark
[06:41] <cpaelzer> Assid: yep, raw-read and seek errors should normally be rare
[06:41] <lotuspsychje> Assid: smart test passed, perhaps also take a look in your syslog for IO issues
[06:41] <cpaelzer> unless you had some crazy special test load (unlikely)
[06:42] <Assid> cpaelzer: thats what i was thinking.. it shouldnt be that much.. but the other values dont look like it..
[06:42] <Assid> and the smart tests pass fine
[06:42] <Assid> i need to do a long test perhaps tonight after work..
[06:43] <cpaelzer> OTOH
[06:43] <cpaelzer> it seems that seagate encodes these values in a special format the tool might not know about
[06:43] <cpaelzer> http://www.users.on.net/~fzabkar/HDD/Seagate_SER_RRER_HEC.html
[06:43] <cpaelzer> so take it with a grain of salt
[06:44] <cpaelzer> and maybe do the math yourself to check the values :-)
[06:44] <Assid> uhh.. crazy idiots..
[06:45] <Assid> if they wanna do that.. why not provide a patch to smartmontools to be able to READ it back in a standardised format
[06:46] <cpaelzer> https://lime-technology.com/forums/topic/31038-solved-seagate-with-huge-seek-error-rate-rma/
[06:46] <cpaelzer> similar discussion
[06:46] <cpaelzer> Assid: I assume there is a list of per device quirks already
[06:46] <cpaelzer> just nobody yet cared to create one for this device
[06:46] <cpaelzer> unfortunately manufacturers still love to hold back specs sometimes
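For context on the decoding: per the fzabkar page linked above, Seagate packs the raw Seek_Error_Rate into a 48-bit field, the upper 16 bits being the actual error count and the lower 32 bits the total number of seeks, which is why the raw number can look huge on a healthy drive. A quick sketch with a made-up sample value:

```shell
# Unpack a Seagate 48-bit Seek_Error_Rate raw value (the sample value
# is hypothetical; real ones come from: smartctl -A /dev/sdX).
# Upper 16 bits = seek errors, lower 32 bits = total seeks.
raw=$(( 0x0000000A2B67 ))
errors=$(( raw >> 32 ))
seeks=$(( raw & 0xFFFFFFFF ))
echo "seek errors: $errors of $seeks seeks"
```

So a scary-looking raw value of 666471 here actually means zero errors over 666471 seeks.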
[06:47] <Assid> so this device somehow lags slightly every time i run a command..  while i get its a 4th gen i5, it really shouldnt behave the way it is
[06:50] <cpaelzer> ltrace -S and perf are your friends
[06:50] <cpaelzer> this could be so many sources
[06:50] <Assid> yeah i'll check that
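To unpack the ltrace/perf pointer a bit: a first triage step is separating CPU time from waiting. A sketch, with `ls` standing in for whichever command lags (ltrace and perf usually need installing, e.g. `apt install ltrace linux-tools-generic`):

```shell
cmd=ls    # stand-in for the command that feels slow

# Wall-clock time far above user+sys means the command is waiting on
# something (disk, DNS lookups from a profile hook, etc.), not computing.
time $cmd > /dev/null

# Summarise library and system calls, if ltrace is available
if command -v ltrace > /dev/null; then
    ltrace -S -c $cmd > /dev/null
fi

# Context switches and CPU counters, if perf is available
if command -v perf > /dev/null; then
    perf stat $cmd > /dev/null
fi
```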
[06:51] <Assid> so what about the raid 1 situation ..  using 1 samsung 850 pro and 1 wd green (mlc+tlc)  ; would there be an issue with using different drives  altogether
[06:51] <Assid> man.. im worse than a person with ADD..
[06:51] <cpaelzer> honestly that is a LMGTFY question
[06:52] <cpaelzer> TL;DR not recommended but working, has special implications you usually do not want
[06:54] <Assid> while most guides say you should use similar drives or similar-spec drives; i can understand the performance of the slower drive would be the final performance of the system
[06:55] <lotuspsychje> Assid: see also the ##hardware channel for hardware questions, like combining ssd's in raid
[07:08] <lordievader> Assid: Depends a little bit on how it is implemented, but if writes wait until everything is stored on both drives the benefit of an ssd will disappear.
[07:11] <Assid> yes but it will be as quick as the slowest drive in the array.. which would still be fine; since thats an SSD
[07:12] <lordievader> Err, the WD green you mention is a regular hard drive right? Or am I misunderstanding the setup you have in mind?
[07:12] <lordievader> But yes, it will be as quick as the slowest drive in the array.
[07:19] <cpaelzer> lordievader: Assid: I think writes will be as slow as the slowest
[07:19] <cpaelzer> reads will be more interesting
[07:19] <cpaelzer> as they might end up being served round robin
[07:20] <cpaelzer> alternating between the slow and the fast one (or however it switches)
[07:20] <Assid> WD Green SSD ..
[07:20] <cpaelzer> Oh there is a green ssd now
[07:20] <Assid> i wouldnt put an SSD with an HDD
[07:20] <cpaelzer> well then it is probably ok
[07:20] <cpaelzer> if characteristics don't differ too much
[07:20] <Assid> yes thats  why i mentioned MLC+TLC
[07:20] <Assid> i want the benefits of RAID without needing to spend too much on MLC drives
[07:21] <cpaelzer> in general it is preferred to use different drives anyway, to lower the chance of breaking at the same time
[07:21] <cpaelzer> at least not from same production batch
[07:21] <Assid> the TLC just helps incase of hw failure on the drive controller of the samsung..
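One md knob worth knowing for a mixed-speed mirror like this: reads can be kept off the slower member entirely instead of round-robin. A sketch with placeholder device names, not something to paste blindly, since `mdadm --create` is destructive:

```shell
# RAID1 where reads prefer the faster member: a device marked
# --write-mostly still receives every write but is skipped for reads.
# /dev/fast_ssd and /dev/slow_ssd are placeholders for real devices.
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      /dev/fast_ssd --write-mostly /dev/slow_ssd

# The flag can also be toggled on a running array via sysfs
# (the member name under md/ is a placeholder too):
echo writemostly > /sys/block/md0/md/dev-slow_ssd/state
```

With that set, the array writes at the speed of the slower drive but reads at the speed of the faster one.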
[07:31] <Assid> i also have a VM which has the snapshot of the database every 15 minutes.. incase of total hw failure
[07:31] <Assid> once i learn how to use pg  replication , i'd probably use that instead of snapshots
[09:25] <jamespage> coreycb: I'm going to start a run of dep refreshes for b2
[09:44] <jamespage> coreycb: merging with debian as I go
[09:44] <jamespage> coreycb: oslo.config to start with
[10:53] <jamespage> coreycb: ok config, utils, log, i18n and oslotest done and uploaded to cosmic
[11:08] <jamespage> doing context now
[11:09] <rawi> Hello folks. Xenial Server with encrypted root partition. After the kernel update to 4.4.0-128 the server goes into an endless boot loop. Booting the old 4.4.0-127 is OK. Anybody here who experienced this too?
[11:19] <jamespage> context done moving to cache
[12:08] <coreycb> jamespage: cool, i'm starting on clients now
[12:14] <coreycb> jamespage: working on glanceclient and heatclient
[12:22] <ahasenack> hi, does anybody know if inotify can catch changes to a symlink's target?
[12:23] <ahasenack> like I watch resolv (which is a symlink pointing at resolv1)
[12:23] <ahasenack> then ln -sf resolv2 resolv (have resolv -> resolv2)
[12:23] <ahasenack> but inotifywait didn't catch that
[12:23] <ahasenack> https://pastebin.ubuntu.com/p/TSHx3pR8pW/
[12:27] <ahasenack> hm, there are more operations happening there: https://pastebin.ubuntu.com/p/wKpMVY4dVz/
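What's likely going on (the second paste hints at it): inotify resolves the path you hand it, so a watch on the symlink attaches to its current target, and GNU `ln -sf` unlinks the old link and creates a new one without ever touching that target. Watching the parent directory catches the swap instead. A small sketch with made-up names:

```shell
# A watch on the link itself follows it to the target; to see the swap,
# watch the directory instead (needs inotify-tools):
#   inotifywait -m -e create,delete,moved_to /etc
# What ln -sf actually does, visible as DELETE then CREATE on the dir:
d=$(mktemp -d) && cd "$d"
touch resolv1 resolv2
ln -s resolv1 resolv         # resolv -> resolv1
ln -sf resolv2 resolv        # unlinks the old link, creates a new one
readlink resolv              # prints: resolv2
```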
[12:46] <coreycb> jamespage: looks like glanceclient needs a new keystoneauth so going to do that
[13:10] <coreycb> jamespage: keystoneauth and heatclient are done
[13:14] <coreycb> jamespage: working on keystoneclient and keystonemiddleware
[13:40] <jamespage> coreycb: .cache done moving on
[13:41] <coreycb> jamespage: ok. glanceclient is done.
[13:45] <jamespage> coreycb: oh I see you already did .concurrency - lemme check for a rev
[13:45] <jamespage> gbp:info: package is up to date, nothing to do.
[13:45] <jamespage> nice
[13:47] <coreycb> jamespage: ok that was probably during b1. keystonemiddleware is done.
[14:09] <coreycb> jamespage: keystoneclient is done. moving on to python-neutronclient and python-neutron-lib.
[14:19] <jamespage> coreycb: ack
[14:44] <jamespage> coreycb: oslo.db underway
[14:44] <jamespage> coreycb: ovsdbapp needed an update btw
[14:46] <coreycb> jamespage: ack. neutronclient and neutron-lib are done. moving on to novaclient and openstackclient.
[15:39]  * lopta downloads Ubuntu Server 16.04.4 for i386
[15:45]  * compdoc passes the hat for donations so lopta can buy a real computer
[15:48] <lopta> compdoc: This is a test rig that I use for things and stuff.  I have another test rig for amd64.
[15:55] <coreycb> jamespage: novaclient and openstackclient are done.  working on os-brick and os-vif.
[15:57] <jamespage> coreycb: ok I've got this far - https://launchpad.net/~ci-train-ppa-service/+archive/ubuntu/3285/+packages
[15:58] <jamespage> coreycb: will pickup again tomorrow as have a few calls todo before I EOD
[15:58] <coreycb> jamespage: great and thanks!
[15:58] <jamespage> coreycb: those are all merge/update for rocky
[15:58] <jamespage> coreycb: I took the re-align with Debian where possible decision paths
[15:58] <jamespage> but kept to the upstream release tarball for sources
[16:10] <coreycb> jamespage: yep makes sense
[18:51] <powersj> nacc: rbasak: added pulling the built snap as an artifact for the nightly and ci jobs example: https://jenkins.ubuntu.com/server/job/git-ubuntu-ci-nightly/28/
[19:23] <l4m8d4> Hello there, I am trying to set up a server in the following way: There are 2 system disks, on each disk should reside a big LUKS container, which contains one mirror of a btrfs RAID1 configuration. Now, I used to install on BIOS systems in the past, where I would then install grub to the beginning of both disks, which worked fine. Now with the new EFI system I have here, grub has to reside in the EFI
[19:23] <l4m8d4> partition right? So how do I set up an ESP on both drives so they are both bootable in case of a hardware failure?
[19:26] <l4m8d4> The server installer only lets me select one boot disk, where it creates the ESP
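For the record, one common approach (a sketch; /dev/sda1 and /dev/sdb1 are assumed to be ESPs you created during partitioning, and this runs as root on the installed system): install grub once, clone the ESP, and register a second firmware boot entry.

```shell
# Install grub into the first ESP as usual
mount /dev/sda1 /boot/efi
grub-install --target=x86_64-efi --efi-directory=/boot/efi

# Clone its contents onto the second disk's ESP
mkdir -p /mnt/efi2
mount /dev/sdb1 /mnt/efi2
rsync -a /boot/efi/ /mnt/efi2/

# Register a firmware boot entry pointing at the copy
efibootmgr --create --disk /dev/sdb --part 1 \
           --label "ubuntu (disk 2)" --loader '\EFI\ubuntu\shimx64.efi'
```

The clone goes stale whenever grub or shim updates, so the rsync wants re-running after upgrades (a package hook or cron job works).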
[20:16] <rbasak> powersj: thanks!
[20:53] <nacc> powersj: excellent, tyvm
[20:53] <nacc> powersj: will that lead to eventual space issues/how will it be pruned?
[20:54] <powersj> it should (heh) be fine as the jenkins runs are limited to last 25 runs
[20:55] <powersj> that is an extra 5GB of space and I'm ok with that
[20:55] <powersj> 2 jobs * 100MB snap * 25 runs
[20:55] <powersj> if it gets bad I can lower the number
[20:57] <nacc> powersj: ack, thanks for thinking about it :)
[21:06] <l4m8d4> So, nobody here has a clue regarding multiple ESP?
[21:25] <rbasak> nacc: so my plan for now is to upload to edge manually using the artifact after checking the hash built matches origin/master. Sound OK?
[21:25] <rbasak> (as a process, every time)
[21:26] <rbasak> Though I noticed that particular build failed CI due to the bug we haven't been able to reproduce previously
[21:27] <rbasak> If that keeps happening, I could try the edge snapcraft snap instead, which would mean adjusting the CI a little.
[21:31] <nacc> rbasak: yeah i think that's reasonable
[21:31] <nacc> rbasak: i mean the goal is to fix this bug, right? :)
[21:39] <rbasak> Yeah but it's blocked on snapcraft being deterministic and then us being able to give Kyle a reproducer.
[21:51] <nacc> right
[21:51] <nacc> so for now, edge will just be manually updated by you as we land things in master?
[21:51] <nacc> and we'll be leaving beta/stable alone until the bug is fixed?
[21:57] <rbasak> I thought I could promote the edge snap directly to beta/stable in the store.
[21:57] <rbasak> When it's known good. Rather than using the git branches.
[22:06] <nacc> rbasak: ah then yeah you can do that
[22:06] <nacc> rbasak: are you going to delete the git branch then?
[22:07] <nacc> or would we eventually move back to that?
[22:14] <runelind_q> I'm trying to set static DNS servers (ipv6) in an ubuntu1804 container.  It seems like changes in 50-cloud-init.yaml don't persist, should I create a new file with just the DNS servers?
[22:14] <runelind_q> or what's the best way to go about it?
[22:18] <nacc> "Changes to it will not persist across an instance."
[22:18] <nacc> runelind_q: that's for getting networking info from the data source
[22:19] <nacc> runelind_q: you might want #cloud-init as well
[22:19] <nacc> or #netplan