[00:04] smoser: hi o/ .. someone is facing an ec2 image error, let me know any thoughts you may have and I'll relay http://ubuntuforums.org/showthread.php?t=1748533
[00:07] Kyle__: google
[03:34] * Heartsbane curses euca_conf because it took him half an hour to figure out how to swap nodes.
[03:34] :|
[03:35] Auto-detecting the nodes, or setting them manually?
[03:35] manually
[03:35] I had one with bad credentials too
[03:37] How many NCs in your setup?
[03:37] 2, but my boss gave me a bigger blade and took my old one :(
[03:39] which meant I needed to reconfigure for 2 blades, and well, it is fixed now... I'm going to have a beer
[03:39] Enjoy.
=== koolhead11|afk is now known as koolhead11
[08:10] hi
[08:10] I am not able to mount the volume which was created from a snapshot
[08:11] sorry to hear that shahid_, what's the problem?
[08:11] getting this error at mount time: "mount: wrong fs type, bad option, bad superblock on /dev/sdb1,"
[08:11] it means what it means
[08:11] is it formatted?
[08:12] no
[08:13] you can't mount something that is not formatted
[08:13] I can mount it if I format it
[08:13] but it's a snapshot volume. If I format it I will lose my data.
[08:14] look at fdisk -l
[08:14] you will see the partitions
[08:14] fdisk -l output: /dev/sdb1 1 51200 52428784 83 Linux
[08:14] what command are you using to mount?
[08:15] sudo mount /dev/sdb1 /data
[08:16] This is the syslog error: "XT4-fs (sdb1): bad geometry: block count 13107196 exceeds size of device (10485756 blocks)"
[08:16] EXT4-fs
[08:16] that's your problem for sure
[08:17] try an fsck /dev/sdb1
[08:18] Getting the below error
[08:19] fsck.ext4: Can't read a block bitmap while retrying to read bitmaps for /dev/sdb1
[08:19] e2fsck: aborted
[08:21] looks like the partition is screwed in some way
[08:22] try another restore from the snap to a new vol, to check whether the vol creation was the problem or not
[08:24] tried 4 or 5 times, getting the same error
[08:24] partition and/or fs is screwed in the snap
[08:35] flaccid: I reduced the size of the volume when I created the volume from the snapshot. Is that the problem?
[08:35] not if it fits inside the geometry, which in this case it may not have
[08:38] ok,
[08:50] flaccid: fixed the problem
[08:51] sweet, what did you do?
[08:51] flaccid: we cannot reduce the size of the volume. When I increased the size it worked
[08:52] hmm, maybe i'm wrong and you can only increase
[08:52] yeah, would make sense considering it's an image
[08:52] yeah sorry, it's a disk map, so as long as the volume is big enough it will fit
[08:53] flaccid: yes
[08:53] rsync between volumes to go to a smaller volume; then snap
[08:53] my bad on that one; long day
[08:54] flaccid: ok, thanks for your time & cooperation
[08:54] np
=== daker_ is now known as daker
[10:04] flaccid: I am facing one problem
[10:05] created a new volume (60GB) from a snapshot, but after mounting it is showing 50GB (the size of the snapshot)
[10:06] instead of 60GB
[10:06] executed sudo resize2fs -f /dev/sdb1
[12:49] hallyn, you have a pastebin of that or something?
[14:20] smoser: is there a simple way to create a ramdisk eri without installing the kernel .deb?
[14:20] hallyn, no.
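[Editorial aside on the EXT4 "bad geometry" error discussed earlier: the two block counts in the syslog message are enough to see what went wrong. Assuming the usual 4 KiB ext4 block size (check with dumpe2fs -h), the filesystem captured in the snapshot spans more space than the shrunken volume provides, which is why a volume created from a snapshot must be at least as large as the snapshot:]

```shell
# Numbers taken from the syslog error quoted above:
#   bad geometry: block count 13107196 exceeds size of device (10485756 blocks)
# Assumption: 4 KiB ext4 blocks, the common default.
fs_kib=$((13107196 * 4))    # space the snapshot's filesystem expects
dev_kib=$((10485756 * 4))   # space the shrunken volume actually offers
echo "filesystem expects ${fs_kib} KiB"   # 52428784 KiB, ~50 GiB
echo "device provides   ${dev_kib} KiB"   # 41943024 KiB, ~40 GiB
```

[The filesystem size matches the 52428784-block partition in the fdisk output exactly, so the snapshot held a ~50 GiB filesystem that cannot fit on the ~40 GiB volume. Creating the volume at or above the snapshot's size, as the user eventually did, is the fix; to genuinely shrink, rsync onto a smaller volume and snapshot that, as flaccid suggests.]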
[14:21] guess i'd just have to hack on mkinitrd a bit
[14:21] ok, thanks.
[14:21] well, yah, mkinitrd would do it.
[14:21] but that assumes you have /lib/modules/ and such
[14:22] right, i'd want to hack mkinitrd to take an alternative directory as argument
[14:22] so i can dpkg -x kernel.deb and then pass x/lib/modules to mkinitrd
[14:22] not really worth it :)
[14:23] hallyn, i looked into doing that once
[14:23] and basically gave up
[14:23] :)
=== dendrobates is now known as dendro-afk
=== dendro-afk is now known as dendrobates
[17:08] hi all
[17:10] o/
=== niemeyer is now known as niemeyer_lunch
[17:55] hey guys
[17:55] I have a few questions
[17:55] first: is all the disk space of all nodes used and managed by the walrus controller?
[17:55] and second: I can't run images using hybridfox
[17:55] it always returns an error
[17:55] runinstance error or bucket error
[17:56] *createvolume
[17:58] scalability-junk: hi, not really, each instance has its own internal storage. Walrus provides an S3-like key-value store, and the StorageController (SC) provides mountable volumes over the network
[17:59] but walrus storage is just available to the node running eucalyptus-walrus?
[17:59] hi btw
[18:03] the NC will use the local disk space as cache and for the instance store
[18:04] the SC provides EBS support to the instances
[18:04] and Walrus provides the S3 API (get-put interface)
[18:04] those are 3 different kinds of storage available to the instance
[18:05] typically the components (Walrus, SC and NC) will use the local disk as directed by the cloud administrator
[18:05] ah ok
[18:05] and for hybridfox: which version are you running?
[18:05] the latest
[18:05] version?
[18:06] 1.7.000047
[18:07] I have not tried it, but it should work: do you get errors for each command?
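[Editorial aside, looping back to the 14:20–14:23 ramdisk exchange: the workaround hallyn sketches can be spelled out roughly as below. This is a hedged sketch, not a tested recipe; the .deb filename and kernel version are hypothetical placeholders, and since mkinitramfs (Ubuntu's initramfs-tools) only looks under /lib/modules/<version>, the extracted modules are copied there temporarily rather than passed as an alternative directory.]

```shell
# Build an initramfs for a kernel .deb without installing it.
# Placeholder names: substitute your actual .deb and kernel version.
dpkg -x linux-image-2.6.38-8-virtual.deb x/

version=2.6.38-8-virtual
# mkinitramfs reads /lib/modules/<version>, so stage the modules there
# (a bind mount of x/lib/modules would also work), build, then clean up.
sudo cp -a "x/lib/modules/${version}" "/lib/modules/${version}"
mkinitramfs -o "initrd.img-${version}" "${version}"
sudo rm -rf "/lib/modules/${version}"
```

[This is exactly the hack smoser says he gave up on; the copy/bind-mount staging is what avoids patching mkinitrd itself.]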
[18:07] for creating an instance I get an imageverify error and "EC2 responded with an error for RunInstances"
[18:08] and you are sure you are pointing hybridfox at your Eucalyptus cloud and not EC2?
[18:08] and for s3: "EC2 responded with an error for createvolume" and nothing else is shown
[18:08] yeah
[18:08] I can see my security group
[18:09] + my downloaded images from the store
[18:14] I don't know why it's not working
=== dendrobates is now known as dendro-afk
[18:25] any suggestions obino?
[18:34] scalability-junk: I'm still confused whether you are talking with Eucalyptus or EC2
[18:34] did you change the endpoint, if you use eucalyptus?
[18:35] if the command line works, you are probably missing a step
[18:35] the endpoint url?
[18:35] yeah
[18:36] hmm .. not sure then. Hybridfox works pretty well for me
[18:36] mh damn
[18:36] which version of Eucalyptus or UEC are you using?
[18:36] firefox 4?
[18:36] 10.10
[18:37] I'm still on firefox 3
[18:37] not sure if that is an issue though
[18:37] let's try
=== daker is now known as daker_
[18:43] seems to be my uec installation
[18:43] thanks
=== niemeyer_lunch is now known as niemeyer
[18:53] scalability-junk: what is the problem?
[19:01] Hi everyone
[19:01] Welcome to the weekly Ensemble cloud community meeting
[19:01] jimbaker: Can you get us started please
[19:01] kim0, sounds good
[19:02] the team is preparing for our budapest milestone. you can see our current progress here: http://people.canonical.com/~niemeyer/budapest.html
[19:02] Hi there!
[19:03] Hey
[19:03] some of these bugs in uncategorized should not be there, btw. however, the other columns reflect the current progress
[19:03] :)
[19:03] jimbaker: can you comment on what has been accomplished since the last meeting?
[19:03] di3gopa: hi there
[19:04] kim0, i think the highlight for me was reestablishing the stability of ensemble.
[19:04] this was somewhat confounded by the AWS outage
[19:05] I remember back then, you were trying to get Ensemble running in multiple regions?
[19:05] however, it was more an issue of keeping our dependencies current, in particular, as i recall, the python zookeeper bindings
[19:05] CurtisElgin: Hi
[19:05] once that was fixed, we were able to see multiregion support, which is now in trunk
[19:06] jimbaker: so right now .. we do have multi-region support
[19:06] kim0, correct
[19:06] how can I launch in eu-west for example
[19:06] kim0, yes
[19:06] what do I need to do that?
[19:07] kim0, you need to specify the region setting in environments.yaml
[19:08] Okay sounds good
[19:08] that's great .. so we're liberated from us-east now
[19:08] Amazon can fail all they want now
[19:08] jimbaker: anything else to add
[19:08] by default it is us-east-1, but you can specify other regions like us-west-1, etc.
[19:08] i will see if i can dig up the comprehensive list
[19:09] kim0, progress is also being made on the following issues:
[19:09] 1. automatic dependency resolution of services
[19:09] 2. service configuration settings
[19:10] 3. exposing of ports, so we can move away from the current "all ports are open" firewall policy
[19:10] these features are not merged into trunk yet, right?
[19:11] kim0, some of the supporting functionality for 2 and 3 is now merged into trunk, but not yet the full functionality
[19:11] the team is working hard on 2 and 3 to see that they are available at budapest
[19:11] great!
[19:11] we will all be there at UDS
[19:11] awesome
[19:12] jimbaker: thanks a lot
[19:12] switching to hazmat
[19:12] kim0, thanks for giving me the floor
[19:12] anything to add?
[19:12] jimbaker: you covered everything, right?
[19:12] like the whole team :)
[19:13] kim0, that should have been the highlights for everybody, but i'm sure bcsaller and hazmat can add more to the discussion
[19:13] cool ..
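[Editorial aside on the region setting jimbaker mentions: a region entry in environments.yaml looked roughly like the sketch below. This is a hedged reconstruction for the 2011-era Ensemble EC2 provider; the file path, the control-bucket and admin-secret fields, and all values are assumptions or placeholders, not taken from this log.]

```yaml
# ~/.ensemble/environments.yaml (path and fields assumed from the
# Ensemble EC2 provider of this era; values are placeholders)
environments:
  sample:
    type: ec2
    region: eu-west-1        # defaults to us-east-1 when omitted
    control-bucket: ensemble-faefb490d69a41f0a3616a4808e0766b
    admin-secret: 81a1e7429e6847c4941fda7591246594
```

[The supported region values jimbaker lists later in the log are us-east-1, us-west-1, eu-west-1, ap-northeast-1 and ap-southeast-1.]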
[19:13] throwing the ball to hazmat first
[19:13] anything to add here
[19:13] * hazmat ponders
[19:14] * kim0 waves at everyone
[19:14] kim0: I think that was good coverage as far as current activity goes
[19:14] Anyone with questions or comments about Ensemble?
[19:14] sounds good
[19:14] great progress indeed
[19:14] We're really focused on having something good for UDS now
[19:14] Yes, that makes sense
[19:14] We hope that past UDS we should have a stable release with those features in a more consumable fashion
[19:15] niemeyer: are we having dedicated ensemble uds sessions?
[19:15] I think I only spotted one
[19:15] kim0, here are the currently supported regions: us-east-1, us-west-1, eu-west-1, ap-northeast-1, ap-southeast-1
[19:16] jimbaker: I think that's all the regions that exist :)
[19:17] Anyone new or having questions this time? please say hi or ask your question
[19:17] kim0, sounds good then! next up will be covering availability zones, but we need to have a better process for that (too manual at this point)
[19:18] jimbaker: an Ensemble formula, however, all lives inside one region, correct?
[19:18] kim0, we only support ensemble at this time in one region
[19:18] With the ec2 failure, many are considering spreading out their deployments
[19:18] an obvious thing for us to support is doing this across availability zones (where the latency is low)
[19:19] Yeah
[19:19] Awesome
[19:19] and then across regions.
[19:19] however, high latency between regions makes that not desirable with zookeeper, which is a foundational technology for ensemble
[19:19] - open floor -
[19:19] Any questions or comments from anyone are welcome
[19:20] jimbaker: Thanks for all the info :)
[19:20] looking forward to seeing the whole team at UDS
[19:20] there are some current patches for zookeeper, plus ongoing discussion, that may allow zookeeper to support what we need to do that, via a delegated model
[19:20] that's good to know
[19:21] see https://issues.apache.org/jira/browse/ZOOKEEPER-892 for the true bleeding edge :)
[19:21] hehe :)
[19:22] Okie
[19:22] I think that's all
[19:22] thanks everyone!
[19:23] MennaEssa: Hi there
[19:27] MennaEssa: Hi again :)
[19:28] di3gopa: hi there .. first time around here?
=== ahs3 is now known as ahs3-lunch
[20:02] hi,... i've been looking all over the net and I cannot seem to find answers to my question.. this is the first time I've tried cloud and I am completely stuck here: after euca-run-instances my instance never changes away from pending,.. yesterday one had quit and that was it
=== dendro-afk is now known as dendrobates
[20:09] It seems the more i look the more confused i'm becoming concerning all of this... i got an account on rightscale, the node and controller detect each other, downloaded an image,... and I'm stuck there
=== dendrobates is now known as dendro-afk
=== dendro-afk is now known as dendrobates
=== ahs3-lunch is now known as ahs3
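[Editorial note on the final, unanswered question: an instance stuck in "pending" on UEC/Eucalyptus of this era was usually a capacity, image-transfer, or node-registration problem. A few hedged first steps; the commands are standard euca2ools/Eucalyptus ones, but the log paths assume a default package install:]

```shell
# Does the cloud have free capacity for the chosen instance type?
# "verbose" lists free/total slots per VM type across the zones.
euca-describe-availability-zones verbose

# Watch the instance state and any pending -> terminated transitions.
euca-describe-instances

# The Eucalyptus logs usually say why an instance is stuck
# (image download, hypervisor, or credential problems).
tail -f /var/log/eucalyptus/cc.log    # cluster controller (front end)
tail -f /var/log/eucalyptus/nc.log    # node controller
```

[If the zone listing shows zero free slots for every type, the NC either is not registered or lacks resources, which matches the "node and controller detect each other but nothing runs" symptom described above.]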