[05:06] <grassvalley> Hello everyone, how do I use this chatroom?
[05:07] <grassvalley> #k
[08:11] <lordievader> Good morning.
[12:22] <ws2k3> when I run dmesg on my server it says "Segmentation fault (core dumped)". my /var/log/syslog has this line: [24626248.500463] dmesg[3879] general protection ip:401813 sp:7fffa010c960 error:0 in dmesg[400000+5000]. what can this be?
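Since dmesg itself is crashing, a hedged workaround is to read the same kernel messages from the files rsyslog already writes, and to reinstall the package that ships the dmesg binary in case it is corrupted. This is a sketch, not a diagnosis; paths are the Ubuntu defaults.

```shell
# dmesg is segfaulting, so read the kernel ring buffer another way:
# rsyslog's kern.log and syslog carry the same lines (Ubuntu default paths).
for f in /var/log/kern.log /var/log/syslog; do
    [ -r "$f" ] && tail -n 5 "$f"
done
# A corrupted binary is one plausible cause; reinstalling it is cheap to try:
#   sudo apt-get install --reinstall util-linux
echo "checked"
```

If the reinstalled dmesg still crashes, memory or disk corruption is worth ruling out next.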
[13:05] <rbasak> utlemmin`, Odd_Bloke: do you look after the Docker image that ends up being fetched as "ubuntu:trusty" by users?
[13:05] <rbasak> See bug 1505164
[13:21] <utlemming_sprint> rbasak: looking
[13:22] <utlemming_sprint> rbasak: this looks concerning
[13:24] <rbasak> utlemming_sprint: I'm pretty baffled as to how that could happen by accident.
[13:25] <utlemming_sprint> rbasak: well, the bug is wrong... because I can't repro it
[13:25] <utlemming_sprint> rbasak: the versions are there but apache installs correctly
[13:25] <rbasak> utlemming_sprint: yes, apache does install correctly
[13:25] <rbasak> utlemming_sprint: a subsequent install of libapache2-mod-wsgi-py3 fails though, as described
[13:25] <utlemming_sprint> rbasak: I think that the likely story is out of date apt cache
[13:25] <rbasak> utlemming_sprint: it's not that the apt cache is out of date
[13:26] <rbasak> utlemming_sprint: it is that the docker image ships a package from trusty-proposed without trusty-proposed in sources.list.
[13:26] <rbasak> utlemming_sprint: (and it shouldn't have a package from trusty-proposed anyway)
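One quick way to see where an installed package came from is `apt-cache policy`: if the installed version doesn't appear in any row of the version table, it was installed from a repo that is no longer in sources.list. The sketch below parses sample output; the version string is the one from the bug discussion, and the truncated output is illustrative, not captured from the image.

```shell
# Illustrative `apt-cache policy python3.4` output; on a real system run:
#   apt-cache policy python3.4
policy='python3.4:
  Installed: 3.4.0-2ubuntu1.1
  Candidate: 3.4.0-2ubuntu1.1'
# Pull out the installed version for comparison against the version table.
installed=$(printf '%s\n' "$policy" | awk '/Installed:/ {print $2}')
echo "$installed"
```

Comparing that string against the repositories listed further down in the real output shows whether the package is orphaned.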
[13:26] <utlemming_sprint> rbasak: I'm checking how docker does their builds
[13:27] <rbasak> utlemming_sprint: is it something you deliver? Or not in our (Ubuntu) hands?
[13:27] <utlemming_sprint> rbasak: we deliver the base image and then they do some things to it
[13:27] <rbasak> I see, OK
[13:27] <utlemming_sprint> rbasak: they as in docker
[13:28] <rbasak> So I guess the question is whether the base image includes Python from trusty-proposed or trusty-proposed in sources.list.
[13:28] <utlemming_sprint> rbasak: so the image that we delivered to Docker has 3.4.0-2ubuntu1.1
[13:28] <rbasak> utlemming_sprint: OK, so it sounds like they're doing something broken
[13:28] <rbasak> utlemming_sprint: I'll ask kirkland to pass to Docker upstream.
[13:28] <rbasak> utlemming_sprint: thanks
[13:29] <utlemming_sprint> rbasak: that is my $0.02. the cloud image team does have relationships with the docker guys
[13:29] <utlemming_sprint> rbasak: I had lunch with them on Thursday last, incidentally.
[13:39] <Odd_Bloke> rbasak: utlemming_sprint: I think this is caused by the Python 3.4.3 release to, and subsequent removal from, the trusty archive.
[13:39] <Odd_Bloke> rbasak: utlemming_sprint: So this isn't to do with -proposed ending up on the image, it's just the image being created in that window of failure.
[13:39] <utlemming_sprint> Odd_Bloke: you're right
[13:40] <Odd_Bloke> So I'll orchestrate getting a new image out, but we know the root cause. :)
[13:40] <utlemming_sprint> Odd_Bloke: See https://partner-images.canonical.com/core/trusty/20151001/ubuntu-trusty-core-cloudimg-amd64.manifest (which matches the docker image) and https://partner-images.canonical.com/core/trusty/20151009/ubuntu-trusty-core-cloudimg-amd64.manifest which is the latest
[13:42] <rbasak> Odd_Bloke: whoa. We did that? I'm surprised it was pulled in that way instead of putting a reverted higher version in trusty-updates. Thanks.
[13:42] <utlemming_sprint> Odd_Bloke: do you have a bug number handy?
[13:42] <rbasak> https://bugs.launchpad.net/ubuntu/+source/python3.4/+bug/1500768 is the regression bug
[14:58] <jak2000> i always need to type: sudo route add default gw 192.168.0.1, but which file do i need to modify to make it permanent?
[15:00] <lordievader> jak2000: I guess this is for a static ip? Add it to /etc/network/interfaces.
[15:01] <jak2000> lordievader: http://pastie.org/10476820
[15:02] <lordievader> jak2000: Exactly, there is no gateway defined.
[15:03] <jak2000> yes
[15:03] <jak2000> how to define?
[15:03] <jak2000> gateway 192.x.x.x
[15:03] <jak2000> ?
[15:05] <jak2000> done
[15:06] <jak2000> after 2-3 minutes my ubuntu server sleeps, how do i fix it? i don't want it to sleep, any advice?
[15:36] <jak2000> lordievader?
[15:38] <RoyK> jak2000: gateway 192.168.0.1 perhaps
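For a static configuration in /etc/network/interfaces, the gateway line goes inside the iface stanza. A minimal sketch, with addresses matching the 192.168.0.x discussion and an assumed interface name:

```
# /etc/network/interfaces (stanza sketch; eth0 and addresses are examples)
auto eth0
iface eth0 inet static
    address 192.168.0.10
    netmask 255.255.255.0
    gateway 192.168.0.1
```

After editing, `sudo ifdown eth0 && sudo ifup eth0` (or a reboot) applies it, making the `route add` at boot unnecessary.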
[15:39] <jak2000> done
[15:40] <jak2000> RoyK i have installed ubuntu 14.04 basic, no gui installed, but why does it sleep after 2-3 minutes? i think it's a full sleep because it doesn't answer pings
[15:40] <RoyK> no idea - try a reboot to see if the gateway is right
[15:43] <jak2000> ok
[17:12] <lordievader> jak2000: You could easily find this out yourself: https://help.ubuntu.com/lts/serverguide/network-configuration.html#ip-addressing
[17:26] <linocisco> how to connect USB Modem to enable internet on ubuntu 14.04 server?
[17:26] <linocisco> what are the settings to configure where?
[17:38] <linocisco> how to connect USB Modem to enable internet on ubuntu 14.04 server?
[17:38] <linocisco> what are the settings to configure where?
[17:43] <linocisco> hi how to connect USB Modem to enable internet on ubuntu 14.04 server?
[17:48] <linocisco> hi how to connect USB Modem to enable internet on ubuntu 14.04 server?
[17:49] <lordievader> !patience | linocisco
[17:50] <teward> !crosspost | linocisco
[17:51] <linocisco> teward, I didn't really want to. I am using the well community-supported ubuntu with big hopes in the community
[17:55] <teward> linocisco: you still shouldn't crosspost.
[17:57] <linocisco> teward, I will use centos .bye ubuntu server
[17:58] <lordievader> I guess that is one way to deal with your problems...
[17:58] <teward> my guess from their messages in #ubuntu is that they're already using centos
[19:21] <cisconinja> good evening everyone
[19:21] <cisconinja> I am having a little dilemma with rsyslog, would i be able to get assistance here?
[19:23] <lordievader> !ask | cisconinja
[19:28] <cisconinja> ok, i am trying to send syslog from my cisco device (192.168.1.49) to my rsyslog server (192.168.1.46). I followed this tut: http://tinyurl.com/npgx6r8. I don't see anything being recorded in my log file /var/log/cisco/cisco.log. however, i see traffic generated from my cisco device using tcpdump, and it is getting to the right port as well! http://pastebin.com/kxacY5Fk. What am i missing or doing wrong? TIA
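Traffic reaching the port but nothing landing in the file usually means the UDP listener or the filter rule isn't in place. A hedged sketch of the kind of rsyslog drop-in such tutorials set up; the filename and the filter condition are assumptions, not taken from the linked tutorial:

```
# /etc/rsyslog.d/10-cisco.conf (sketch) — listen on UDP 514 and route the
# Cisco device's messages to their own file
$ModLoad imudp
$UDPServerRun 514
if $fromhost-ip == '192.168.1.49' then /var/log/cisco/cisco.log
& stop
```

Restart with `sudo service rsyslog restart`, and check that /var/log/cisco/ exists and is writable by the syslog user; on older rsyslog versions `& ~` is used instead of `& stop`.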
[19:48] <coreycb`> jamespage, neutron 7.0.0~rc2-0ubuntu2 uploaded
[19:49] <coreycb`> jamespage, that should fix some dep 8 issues
[19:55] <cisconinja> nm i figured out what was my problem . thank you all
[20:42] <hallyn> dannf: ok, i put a first merge attempt of debian's 2.4 qemu to https://launchpad.net/~serge-hallyn/+archive/ubuntu/qemu-vgicv3 . i have not yet added your patch, will do that tonight or in the morning if this merge doesn't bomb
[20:42] <hallyn> which i somewhat expect it will
[20:43] <hallyn> for now, i've worked more than the half day i was planning, so /me out
[20:43] <dannf> hallyn: fair enough - thanks!
[21:46] <med_> smoser, utlemming_sprint : if two bootable volumes are presented to a kvm (openstack) vm, shouldn't the instance still boot from vda (instead of vdb)? And I'm guessing cloud-init isn't remotely involved in this, so not sure why I'm asking you two. More of a kvm or nova issue.
[21:46]  * med_ was clearly thinking out loud ^
[23:19] <IPU> hi there :)
[23:21] <IPU> i'm running 14.04.3 lts on a 64gb emmc. the server's purpose would be a webserver for a groupware installation for 5-10 users (apache2, mysql, php, postfix, dovecot) and i was also planning to add a nagios instance sometime later... any suggestions concerning the partitioning scheme?
[23:23] <IPU> i thought to add at least separate partitions for /home /mail /var /tmp but i'm quite unsure about the sizes
[23:26] <IPU> i also thought about moving the /var and /mail partitions to an external usb3 hdd to avoid wearing out the emmc, but i have no experience concerning the performance
[23:27] <JanC> /tmp should probably be a tmpfs (which is the default IIRC)
[23:28] <IPU> yeah it's default
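If /tmp were not already tmpfs, a single fstab line would do it; a sketch with a size cap so a runaway writer can't eat all RAM (the 512M cap is an example, not a recommendation for this box):

```
# /etc/fstab — mount /tmp in RAM with a size limit (sketch)
tmpfs  /tmp  tmpfs  defaults,noatime,nosuid,size=512M  0  0
```

`sudo mount -o remount /tmp` applies a changed size without a reboot.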
[23:29] <JanC> not sure why you need /mail
[23:30] <JanC> or even why you need partitions  :)
[23:31] <JanC> or at least, why you need multiple "user partitions" :)
[23:31] <JanC> just put everything on /home or /srv
[23:33] <IPU> primarily for security reasons, and to avoid a total system crash if for example a daemon runs amok and fills up the partition with trash
[23:34] <jpds_> IPU: System shouldn't crash, that's what the reserved blocks are for
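ext4 reserves a percentage of blocks (5% by default) that only root can use, so system daemons keep working when unprivileged processes fill the disk. A sketch, assuming an ext4 filesystem; the device name is an example, and the arithmetic shows what 5% means on a hypothetical 64 GiB filesystem:

```shell
# Inspect the reserved-block count (device name is an example):
#   sudo tune2fs -l /dev/sda1 | grep -i 'reserved block'
# Adjust the reserved percentage:
#   sudo tune2fs -m 5 /dev/sda1
# What 5% reserves on a hypothetical 64 GiB filesystem, in MiB:
echo $(( 64 * 1024 * 5 / 100 ))
```

So even a "full" disk leaves a few GiB for root-owned daemons and logs.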
[23:35] <JanC> jpds_: well, unless daemons run as root, I guess ;)
[23:36] <JanC> but, I think logs & mysql databases would be the only likely culprits for that
[23:37] <JanC> and it's possible to put user databases and user logs in their respective (virtual) home directories
[23:37] <IPU> i've often seen systems become inaccessible because the entire diskspace was eaten up during a malfunction or, let's say, during an attack
[23:38] <jpds_> IPU: Just reading the first paragraph of http://www.howtogeek.com/196541/emmc-vs.-ssd-not-all-solid-state-storage-is-equal/ would make me not put that on a server
[23:39] <JanC> jpds_: you probably also wouldn't use a USB3 HDD on a server :)
[23:39] <jpds_> JanC: I wouldn't, no
[23:42] <jpds_> IPU: Also, I wouldn't cram so much stuff onto the same box, but that could just be me
[23:42] <RoyK> jpds_: For example, the SSD controller spreads read and write operations over all the memory chips in the solid-state drive
[23:42] <RoyK> erm
[23:42] <RoyK> you don't spread reads - only writes
[23:43] <JanC> so, it would be useful to know if this is a company thing or some family/student thing :)
[23:44] <JanC> RoyK: some expensive SSD might do internal RAID where spreading READs would be useful, but yeah  :)
[23:44] <IPU> jpds: uhm... i don't know what kind of emmcs you know, but on my system a class 10 sd card made about 8.5MB/s write and 18.9MB/s read... my emmc makes 39.3MB/s write and 140MB/s read
[23:45] <JanC> some *very* expensive SSDs
[23:45] <jpds_> IPU: My SSD does 550MB/s read, 500MB/s write
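Throughput numbers like these are easy to reproduce; a rough sequential-write check with dd (file size and path are arbitrary; `conv=fdatasync` makes dd wait for the data to actually reach the device, so the number isn't just the page cache):

```shell
# Write 64 MiB and report throughput; the last line is dd's summary.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1
rm -f /tmp/ddtest
```

This only measures sequential writes; random-I/O behaviour on eMMC and SD cards is usually far worse and needs a tool like fio to characterise.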
[23:46] <RoyK> eMMCs could be nice for tiering, though
[23:46] <RoyK> if there was an open source tiering solution out there
[23:47] <jpds_> Like, bcache?
[23:47] <RoyK> no, like btier
[23:47] <RoyK> tiering isn't caching
[23:47] <IPU> jpds_: i would also take an ssd instead if i had the appropriate interfaces for it ^^
[23:48] <JanC> so, this really sounds like a home thing, right?
[23:48] <IPU> jpds_: but the hardware on which i plan to run this server has only sd and emmc ;-)
[23:48] <IPU> it's more like a test for a very small and low power consuming system
[23:49] <IPU> http://www.hardkernel.com/main/main.php
[23:49] <RoyK> tiering is keeping cold data on slow storage - caching is just keeping hot data temporarily duplicated on fast storage, it's not moving things around according to use
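The distinction can be shown with plain file operations: caching copies hot data to the fast device and leaves the original in place, while tiering moves it, so the slow device's space is actually freed. A toy sketch where directory names stand in for storage tiers:

```shell
mkdir -p slow fast
echo "hot data" > slow/file
cp slow/file fast/cached     # caching: a duplicate; the slow tier still holds it
test -e slow/file && echo "cache: original remains on slow tier"
mv slow/file fast/tiered     # tiering: relocated; slow-tier space is freed
test -e slow/file || echo "tier: original is gone from slow tier"
rm -rf slow fast
```

That freed space is the point of tiering: the slow tier's capacity is reclaimed for more cold data, whereas a cache only ever adds capacity on the fast side.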
[23:49] <IPU> for those who know
[23:51] <JanC> RoyK: what's the benefits?
[23:52] <RoyK> JanC: well, if you have, say, 10TB of data, usually 10% or perhaps 20% of the data is "hot", so you want to store that on fast storage, like a raid-1+0 on fast disks or SSDs and the rest on slower storage, say 7k2 drives in raid-6
[23:52] <RoyK> JanC: with multiple tiers, that can be very beneficial
[23:53] <RoyK> JanC: uio.no has 10PiB or so ranging from SSDs for the hottest part to tape (40%) for anything not used the last 6 months or so
[23:53] <JanC> but what's the benefit over caching?
[23:54] <RoyK> caching won't last over a reboot
[23:54] <RoyK> tiering is in the storage itself
[23:54] <JanC> that depends on how you cache
[23:54] <RoyK> and caching is just duplicating things, not moving the data to faster tiers
[23:54] <JanC> lots of caches persist over reboot
[23:55] <RoyK> caching is caching, it's not tierd storage
[23:55] <JanC> well, you copy them to faster tiers
[23:55] <JanC> instead of moving
[23:55] <JanC> which should be faster actually ;)
[23:56] <RoyK> but then, say, you have a tier 1, ssd on pcie, tier 2, ssd on sata, tier 3, 10k sas drives, tier 4, 7k2 drives, tier 5, tape
[23:56] <RoyK> hitachi makes those things
[23:56] <RoyK> and they really work well
[23:56] <IPU> cern is also using a tiering system
[23:56] <RoyK> costs a lot, but then, if you need a bunch of petas, that costs a bit
[23:57] <RoyK> IPU: any idea what sort of storage system they use?
[23:57] <JanC> IPU: CERN's "tiering system" is a caching system
[23:58] <JanC> they don't delete data from their tapes when they copy them to HDD or SSD
[23:59] <JanC> I'm pretty sure Hitachi's "tiering" system actually is a caching system
[23:59] <IPU> http://home.web.cern.ch/about/computing/grid-system-tiers
[23:59] <RoyK> JanC: deleting data from tape takes a while and if the data isn't modified on the upper tier, it makes no sense to remove it from tape
[23:59] <JanC> it would be silly to do otherwise
[23:59] <JanC> RoyK: so then they just "cache"
[23:59] <JanC> mostly
[23:59] <RoyK> JanC: but it makes sense to *move* data from lower tiers like 7k2 drives to 10k/15k to ssds