[11:16] <sj_dk> join #cloudinit
[11:16] <sj_dk> sorry
[13:14] <ananke> this just popped up on showhn: https://news.ycombinator.com/item?id=23310063
[13:34] <ijohnson> hi folks, is there a way to ask cloud-init what datasource was used? I see that with `cloud-init status --long` there is a "details" section with this information, but is that the right way to do it, and is that in a format that can be depended on?
[14:06] <Odd_Bloke> ijohnson: For programmatic use, I wouldn't depend on it.  It reads from /run/cloud-init/result.json, so that would be a better place to look.
[14:06] <ijohnson> Odd_Bloke: so is /run/cloud-init/result.json a stable format that I can rely on for just reading the datasource ?
[14:08] <Odd_Bloke> ijohnson: Yep, there's a v1 key which we'll keep stable.
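[Editor's note] The read described above can be sketched in a few lines. This is a hedged illustration, not official cloud-init API: the helper name `read_datasource` is made up here, and the exact shape of the `v1` value (a datasource string plus an `errors` list) is an assumption based on this conversation and typical instances.

```python
import json


def read_datasource(path="/run/cloud-init/result.json"):
    """Return the datasource string from cloud-init's result.json.

    Per the discussion above, the "v1" key is kept stable; the exact
    value format (e.g. "DataSourceAzure [seed=/dev/sr0]") is assumed.
    """
    with open(path) as f:
        result = json.load(f)
    return result["v1"]["datasource"]
```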
[14:08] <Odd_Bloke> ijohnson: I'd be interested to know a little more about your use case, too, for future reference.
[14:11] <smoser> is this known?
[14:11] <smoser>  http://paste.ubuntu.com/p/mDk8TNvgSc/
[14:12] <smoser> launched a fresh azure instance of 18.04. the clock jumps during boot.
[14:12] <smoser> 2020-05-17 17:02:58,229 - netlink.py[DEBUG]: Wait for media disconnect and reconnect to happen
[14:12] <smoser> 2020-05-26 13:52:56,326 - netlink.py[DEBUG]: netlink socket ready for read
[14:13] <Odd_Bloke> Wow.
[14:13] <smoser> that seems a little "broken" to me. Why wouldn't you set your vm bios to the correct time to start?
[14:14] <smoser> but ... ok, ntp does the right thing and fixes it.
[14:14] <smoser> but what seems *more* broken is that uptime seems unaware of the jump
[14:14] <smoser>  uptime
[14:14] <smoser>  14:13:00 up 8 days, 21:10,  3 users,  load average: 0.00, 0.11, 0.15
[14:14] <smoser> i guess it's possible that this was a "pre-provisioning"
[14:15] <smoser> but i would not have expected that... I suspect on the order of 5 ubuntu vms have been launched on this account in a year
[14:17] <Odd_Bloke> smoser: `journalctl -o short-monotonic` might give you a separate indication of how long the machine has actually been up?
[14:17] <smoser> yeah. just ran that
[14:17] <smoser> it shows the jump
[14:17] <smoser> pasting
[14:17] <smoser> http://paste.ubuntu.com/p/66SRvpJY3k/
[14:18] <smoser> so the jump occurs right around the dhcp. so i'd certainly think it's ntp related.
[14:23] <Odd_Bloke> smoser: Does `journalctl -u chrony.service` shed any light?
[14:24] <smoser> -- Logs begin at Sun 2020-05-17 17:02:46 UTC, end at Tue 2020-05-26 14:23:17 UTC. --
[14:24] <smoser> -- No entries --
[14:24] <smoser> so.. no
[14:30] <Odd_Bloke> smoser: Oh, I was looking at GCE where I think they requested chrony, so perhaps: `journalctl -u systemd-timesyncd.service`?
[14:32] <smoser> /
[14:32] <smoser> https://paste.ubuntu.com/p/fdMpW8BXNy/
[14:33] <smoser> am i just wrong? is uptime supposed to jump like that?
[14:34] <Odd_Bloke> I'm not really sure TBH.
[14:35] <Odd_Bloke> The fact the monotonic timestamp also jumps has me wondering if it's actually just the NTP clock setting that's going on.
[14:39] <smoser> https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1880704
[14:54] <Odd_Bloke> AnhVoMSFT: Would you be able to take a look at ^ and let us know what you think, please?
[14:58] <cpaelzer> smoser: some time daemons do an initial time warp if the delta is above some threshold
[14:58] <cpaelzer> not sure what timesyncd does
[14:58] <cpaelzer> is the final time after the sync the real time that makes sense?
[14:59] <smoser> cpaelzer: right. that didn't surprise me.
[14:59] <smoser> but i dont think the kernel clock is supposed to jump like that
[14:59] <smoser> isn't that what "monotonic" means?
[15:00]  * smoser realizes monotonic might *only* mean not-ever-decreasing
[15:01] <smoser> well, per metacpan "A clock source that only increments and never jumps"
[15:01] <smoser> which is what I thought too
[15:01] <smoser> and since a perl documentation site agrees with me, i'm clearly right.  or should i just be embarrassed.
[15:03] <cpaelzer> exactly - monotonic is split in two ways
[15:03] <cpaelzer> plain monotonic only guarantees it never decreases
[15:05] <Hani> Hi All, we are trying to use cloud-init to initialize our images, but mostly our images have multiple disks (due to a xenserver limitation on max single disk size) and we need an lvm volume that spans multiple disks to show as 1 disk for end users. I see that cloud-init plans to support mdadm raid in the future, so my question: do you know if
[15:05] <Hani> there is any ETA for that? and if yes, do you think the use case of having raid1 just to have all disks shown as 1 will be supported?
[15:11] <smoser> cpaelzer: did you miss "never jumps" above?
[15:12] <rharper> Hani: hey,  advanced storage config is definitely on the roadmap, but it's not coming soon;    in your use-case, are you configuring the root disk or some other disk for end users?
[15:14] <Hani> rharper most users prefer to have all the storage directly in root , so it should appear as 1 logical volume for the client
[15:15] <cpaelzer> smoser: no I didn't miss it, it just never goes down
[15:15] <cpaelzer> and strictly speaking there also are "strictly monotonic" clocks
[15:15] <Hani> the max limit for vdi in xenserver is 2TB and we have some vms with 20TB storage , which will appear as 10x 2TB disks attached to the vm
[15:15] <cpaelzer> smoser: which also guarantee to never report the same value twice
[15:16] <cpaelzer> which can be hard if you need to hold that guarantee across all CPUs
[15:16] <cpaelzer> not sure if Linux provides a strictly monotonic clock to userspace
[15:16] <cpaelzer> some HW does provide one to the kernel
[15:17] <smoser> cpaelzer: i'm still confused.
[15:18] <smoser> google definitely says that a monotonic clock should not jump.
[15:25] <smoser> there is clearly confusion on this. beyond just my head.
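[Editor's note] The semantics being debated above can be checked directly from userspace. POSIX only guarantees that CLOCK_MONOTONIC never goes backwards (equal successive readings are allowed, so it is plain monotonic, not strictly monotonic), while CLOCK_REALTIME is wall time and can be stepped. A minimal sketch:

```python
import time

# CLOCK_REALTIME is wall time and can be stepped (by NTP, or by the
# hypervisor fixing up a stale BIOS clock), so the delta between two
# reads may "jump".  CLOCK_MONOTONIC is only guaranteed never to
# decrease: plain monotonic, so equal successive readings are legal.
wall_a = time.clock_gettime(time.CLOCK_REALTIME)
mono_a = time.clock_gettime(time.CLOCK_MONOTONIC)

wall_b = time.clock_gettime(time.CLOCK_REALTIME)
mono_b = time.clock_gettime(time.CLOCK_MONOTONIC)

# Monotonic readings never go backwards, even if wall time was stepped
# in between; wall_b - wall_a carries no such guarantee.
assert mono_b >= mono_a
```

This matches cpaelzer's point: "never jumps backwards" is guaranteed, "never jumps forwards" is not part of the definition.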
[15:30] <Odd_Bloke> rharper: https://github.com/canonical/cloud-init/pull/391/
[15:30] <Odd_Bloke> meena: ^ is the proper location for that PR I pinged you with y'day.
[15:34] <rharper> Hani: assembling a root volume during boot isn't quite the storage plan for cloud-init; cloud-init does not run in the initramfs where one might be able to adjust a rootfs.   One can construct a two-stage approach here;  using cloud-init and curtin;   (https://curtin.readthedocs.io/en/latest/);  in curtin our testing harness does exactly this (boot an ephemeral environment with a configuration to assemble a rootfs from various disks);
[15:36] <rharper> Hani: cloud-init's initial storage scope is targeting non-root volumes;
[15:42] <Hani> rharper that's what i was looking for, will check curtin. Thanks
[15:51] <rharper> Hani: https://curtin.readthedocs.io/en/latest/topics/integration-testing.html  and https://git.launchpad.net/curtin/tree/tests/vmtests   #curtin if you want to discuss further
[16:03] <smoser> hey. i'm looking at http://paste.ubuntu.com/p/SY9n2SC6cW/ (66-azure-ephemeral.rules)
[16:03] <smoser> where does ATTRS{device_id} come from ?
[16:04] <rharper> smoser: I believe blkid/udev export those values
[16:04] <smoser> sda on that system gets named 'fabric_root'.
[16:04] <smoser>  https://paste.ubuntu.com/p/4QRHZTzCdt/
[16:04] <smoser> that is udevadm info --query=all and it's not there.
[16:05] <smoser> nor is it in /run/udev/data/b8:0
[16:07] <rharper> I don't have an azure instance up just yet, but their scsi device exports those (or is supposed to export those values)
[16:07] <smoser> rharper: smoser@51.143.89.81 if you want to look
[16:07] <rharper> k
[16:10] <smoser> there it is in find /sys | grep f8b3781
[16:10] <rharper> what rule file ?
[16:11] <rharper> ah, yeah, VMBUS
[16:12] <smoser> there are actually 2 rules files. one cloud-init, one walinux-agent
[16:12] <smoser>  /lib/udev/rules.d/66-azure-ephemeral.rules /lib/udev/rules.d/66-azure-storage.rules
[16:12] <rharper> $attr{file}, %s{file}
[16:12] <rharper>        The value of a sysfs attribute found at the device, where all keys of
[16:12] <rharper> that's what I was looking for
[16:12] <rharper> one can directly read sysfs attrs
[16:12] <rharper> in udev rules
[16:13] <rharper> smoser: ok, I'm off, thanks
[16:13] <smoser> annoying that those sysfs attrs aren't part of udev export though.
[16:14] <rharper> I guess it's somewhat duplicate if you know the device path, which I think is knowable
[16:20] <rharper>  /sys/bus/scsi/devices   and /sys/class/block/sda -> ../../devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A03:00/device:07/VMBUS:01/00000000-0000-8899-0000-000000000000/host0/target0:0:0/0:0:0:0/block/sda
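[Editor's note] A rough userspace illustration of what udev's `ATTRS{device_id}` match does: it checks a sysfs attribute on the device and on each of its parent devices, which is why `device_id` turns up via `find /sys` but not among the properties udev exports. The function name `find_attr` and its `root` parameter are hypothetical names for this sketch, not a real udev API.

```python
import os


def find_attr(dev_path, attr, root="/sys"):
    """Walk up a sysfs device path looking for an attribute file,
    roughly what udev's ATTRS{...} match does (it tests the device
    and each of its parents).  Returns the contents, or None."""
    path = os.path.realpath(dev_path)
    while path.startswith(root):
        candidate = os.path.join(path, attr)
        if os.path.isfile(candidate):
            with open(candidate) as f:
                return f.read().strip()
        path = os.path.dirname(path)  # climb to the parent device
    return None
```

For example, starting from `/sys/class/block/sda` (which resolves into the VMBUS device chain shown above), this would find a `device_id` attribute exposed by one of the parent devices.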
[17:55] <smoser> yeah... obvious.
[17:58] <meena> mhm, yeah. very.
[18:09] <blackboxsw> Odd_Bloke: rharper: uss-tableflip functional change to add a new fix-daily-recipe script https://github.com/canonical/uss-tableflip/pull/50
[18:09] <blackboxsw> I want to add docs around this release process. I'm adding notes there on testing:
[18:12] <rharper> blackboxsw: cool
[18:33] <blackboxsw> rharper: push testing instructions
[18:33] <blackboxsw> pushed rather
[18:34] <blackboxsw> I can fix daily focal builds with this and then we can iterate on the README docs (or I can add it to this PR)
[18:34] <blackboxsw> I had wanted to get the 'fix' in to ensure daily build recipes are fixed for focal before a new-upstream-snapshot today
[18:34]  * blackboxsw adds doc changes now. as it shouldn't be too much
[18:37] <rharper> yeah
[18:37]  * rharper will test 
[18:40] <rharper> blackboxsw: https://paste.ubuntu.com/p/fnV3Jv9rTr/
[18:41] <blackboxsw> rharper: sorry that should have been fix-daily-branch -s origin/ubuntu/focal -d ubuntu/daily/focal
[18:41] <blackboxsw> missing origin
[18:42] <rharper> ah, ok
[18:42] <rharper> trying again
[18:42] <blackboxsw> but we should sort that and provide an upstream
[18:42] <rharper> +1
[18:42] <rharper> that worked
[18:43] <rharper> continuing with instructions
[18:43] <blackboxsw> remote param could probably be used for both source_branch and the daily_branch params if provided
[18:43] <blackboxsw> ok can tweak it to add the $remote if not already the prefix of source_branch or daily_branch
[18:44] <rharper> https://paste.ubuntu.com/p/MpZnDnBHJD/
[18:44] <rharper> does that look right? (it looks right to me)
[18:44] <blackboxsw> schweet!
[18:44] <blackboxsw> yeah
[18:44] <blackboxsw> that should fix daily focal at the moment.
[18:44] <blackboxsw> I'm scrubbing docs
[18:44] <rharper> \o/
[18:45] <rharper> and Odd_Bloke is +1 on us fixing focal "manually"  and we'll build/revise documents/tools ?
[18:49] <blackboxsw> I think Odd_Bloke wanted us to review docs first and fix script later. they are both small so I'll push the doc changes to this branch so we can review them.
[18:49] <blackboxsw> then we can land them as one
[18:49] <blackboxsw> almost done w/ docs. probably 10 mins
[18:53] <blackboxsw> rharper: docs pushed
[18:54] <blackboxsw> rharper: one thing we could do is make sure scripts/cherry-pick emits the message (now run `fix-daily-branch -s ubuntu/<release> -d ubuntu/daily/<release>`)
[18:54] <blackboxsw> so that the scripts document next step (just like new-upstream-snapshot does)
[18:54] <rharper> that sounds nice
[19:37] <Odd_Bloke> blackboxsw: Reviewed that PR; I'm +1 on using it locally in its current state to get focal fixed.
[19:58] <blackboxsw> thanks Odd_Bloke !
[21:25] <blackboxsw> Odd_Bloke: thanks, late , but I've pushed the changes to require <release> and <remote> positional parameters
[21:25] <blackboxsw> good thoughts
[21:30] <Odd_Bloke> blackboxsw: Nice, thanks!
[21:46] <blackboxsw> Thanks Odd_Bloke got it.
[22:46] <blackboxsw> rharper: I resolved Odd_Bloke's review comments on https://github.com/canonical/uss-tableflip/pull/50 if you give it a final thumbs up I can fix ubuntu/focal daily builds
[22:47] <rharper> blackboxsw: yeah
[22:57] <blackboxsw> thanks rharper reverted that line in the docs
[22:59] <rharper> ok
[23:00] <rharper> blackboxsw: where does gitcmd come from ? I don't see a source wrapper or anything
[23:00] <rharper> just curious
[23:00] <rharper> nm
[23:00] <rharper> right in my face
[23:00] <rharper> blackboxsw: ok, +1
[23:02] <blackboxsw> heh rharper right gitcmd was your suggestion for me on a previous PR, to catch failures :)
[23:02] <blackboxsw> I figured I'd apply it here too
[23:02] <blackboxsw> and thx
[23:02]  * blackboxsw fixes ubuntu/daily/focal now
[23:04] <blackboxsw> ok daily focal recipe trying to build now. https://code.launchpad.net/~cloud-init-dev/+archive/ubuntu/daily/+recipebuild/2574992
[23:13] <blackboxsw> d'oh forgot to sync github -> LP
[23:19] <blackboxsw> ok daily build recipe  working on focal now
[23:27] <blackboxsw> and daily build recipe for groovy was broken, it was building packages for focal ~20.04 instead of groovy 20.10 so there is a conflict on upload.