Friday, November 7, 2014

Daily Freelance Steps

So you wanna freelance? Here is what I have to do every day.

Check out my feeds for work:

Craigslist
LinkedIn
Email
Text customers and ask them how things are going
Facebook
oDesk
Freelancer

Once I find work:

Contact each customer in a personal manner and provide contact information, my rate, and project ideas.
Spend the rest of the work day on current projects, paying bills, coordinating meetings, posting ideas to Indiegogo, and working on InnoCentive challenges. woot.

Tuesday, September 9, 2014

Wifi Capacity or Coverage?

http://www.sniffwifi.com/2014/09/capacity-or-coverage-or-neither.html?m=1

Friday, September 5, 2014

Capacity or Coverage or Neither?

In the beginning, there was Coverage.  And so it was that 802.11 and his only begotten Son, WiFi were blessed upon PCMCIA cards who doth receiveth adequateth Coverage.

And then as Coverage grew and the lands of Tablets wereth discovered, so came Capacity.  And thus did Capacity grow to represent all that was good and great about deploymenteths upon this fruitful land.

And now, my Sons and Daughters, things have changeth again.   For Coverage and Capacity will both leave the Higheth of Densiteth WiFi wanting.  And so we shun them both.  For it is Neither -- Coverage nor Capacity -- that will taketh thee to the WiFi promised land.

In case it was unclear, designing wireless LANs for Capacity has become an article of faith in some circles.  Keep it to 40 devices per AP.  Or 50.  Or 150.  Whatever the number is, the whole concept is misguided.

WiFi uses radio frequency as its physical layer, and there is a finite amount of radio frequency in any given location.  If every radio frequency channel gets used up, then adding more access points to fulfill a Capacity requirement becomes counter-productive.  It becomes the equivalent of adding a Hub to handle a high density wired LAN.  A hub doesn't add additional access.  It just splits up the existing access, often degrading the quality of the network.  Same thing for adding APs to occupied channels.
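To see why co-channel APs don't add Capacity, here's a back-of-the-envelope sketch. The throughput and client counts are assumptions for illustration, not measurements:

```shell
# Co-channel APs share one collision domain, so per-client airtime is set
# by the channel, not by how many APs you bolt onto it.
channel_mbps=100   # assumed usable throughput of one channel
clients=40         # assumed clients on that channel, across ALL co-channel APs
awk -v r="$channel_mbps" -v n="$clients" \
    'BEGIN { printf "~%.1f Mbps per client, regardless of AP count\n", r/n }'
```

Doubling the APs on an occupied channel leaves that per-client number alone at best; in practice it drops further because the APs contend with each other too.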

A lot of people understand that having more than one AP on the same channel is a problem, and that's good.  The problem is that some of those people go about designing a wireless network for one-AP-per-channel in the wrong way.

Imagine that the diagram below represents the APs that are on channel one in my high density area:

[Diagram: two channel-1 APs with non-overlapping coverage cells, marked with a green check mark]
See that nice, green check mark?  That means you've surveyed the room and confirmed that your two APs on channel one don't interfere with one another.  You've (theoretically) added Capacity to your wireless LAN.  Now twice as many users can get on channel one (theoretically).

The problem with the above picture is that it is often accomplished by turning down the power on APs.  DON'T DO THAT.  It usually results in a Capacity-based design failing once put under stress.

Here's what happens when a Capacity design with low AP transmit power starts to get crowded:

[Diagram: client transmissions from one cell bleeding into the other channel-1 AP's coverage area]
Consumer devices (smartphones, tablets, laptops, etc.) typically don't lower their transmit power to the level of APs.  That ends up ruining your previously-thought-to-be-solid design.  Users start to connect.  Users' devices start transmitting data.  The radio frequency from that data bleeds into the other AP's coverage area.  And the end result is that users in the bleed area get interfered with.  As the room fills up with users, this interference happens so often that the wireless becomes subpar.

When designing WiFi, it's best to ignore both Coverage and Capacity.  Instead, just stick to radio frequency principles.  Keep AP transmit power around 12 to 15 dBm (because that's the range most consumer devices transmit within).  Make sure that no more than one AP on a given channel covers any area.  And with those two principles in mind, try to make sure that your signal is strong, that your coverage goes only where your users are, and that you create enough overlap for users who expect to stay connected while moving from place to place.  Just like the 802.11 Creators intended it.
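As a quick sanity check on that 12 to 15 dBm range, here's a small sketch converting dBm to absolute milliwatts (the helper name is mine, not from any WiFi tool):

```shell
# dBm to milliwatts: mW = 10^(dBm/10).
# 12-15 dBm works out to roughly 16-32 mW, which is the ballpark
# a consumer client radio transmits at.
dbm_to_mw() {
    awk -v d="$1" 'BEGIN { printf "%d\n", 10^(d/10) + 0.5 }'
}
dbm_to_mw 12   # -> 16
dbm_to_mw 15   # -> 32
```

Cranking an AP to 20+ dBm (100+ mW) just widens the gap between what clients can hear and what they can answer.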

Thursday, September 4, 2014

ICQ Windows client to Linux centerim client for basic remote commands on a Linux shell

This is a quick proof of concept enabling basic remote administration of a Debian 7 server running centerim: shell commands sent over ICQ are piped to /bin/sh, and the output is sent back through centerim to be echoed in the Windows ICQ client. First install:

apt-get install inotify-tools

script file in /root where xxxxxxxx is the ICQ client giving commands and receiving output.

notifyme
#!/bin/sh
# Keep centerim running in a detached screen session to receive messages
screen -S "CenterimRX" -dm centerim
# Wake up every time the history file for ICQ user xxxxxxxx is written
while inotifywait -e close_write /root/.centerim/xxxxxxxx/history
do
        tail -5 /root/.centerim/xxxxxxxx/history | grep secretcommand > /root/runme
        echo "it changed"
        if grep -q secretcommand /root/runme
        then
                echo "Command Found"
                # Everything before the '^' is the command; run it and
                # message the output back to the remote ICQ user
                awk 'BEGIN {FS="^"} {print $1}' /root/runme | /bin/sh | centerim -s msg -p icq -t xxxxxxxx
        else
                echo "No Command To Run"
        fi
done

Configure the centerim client in /root/.centerim/

To use this, send a test message to the Linux server's ICQ user id:
ls -la^secretcommand

The script watches the centerim chat history for a particular remote admin user's ICQ UID, parses out the word secretcommand, and recognizes that a command has been armed. The ^ after the command allows us to use spaces for more complicated commands, since ^ is the field separator for awk.
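A minimal sketch of that parsing step, using the same awk invocation as the notifyme script (the message lines here are just examples):

```shell
# A trigger line as it would appear in the centerim history file
line='ifconfig^secretcommand'

# '^' is the awk field separator, so $1 is everything before it:
# the command to run, with any spaces inside it preserved
cmd=$(printf '%s\n' "$line" | awk 'BEGIN {FS="^"} {print $1}')
echo "$cmd"   # -> ifconfig
```

A command with arguments works the same way: 'ls -la^secretcommand' yields 'ls -la' as $1, which is why the ^ delimiter is needed instead of plain whitespace.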

Input:
rx : 4 September at 16:56 :
ifconfig^secretcommand

tx : 4 September at 16:57 :
eth0      Link encap:Ethernet  HWaddr 00:16:41:3b:9a:2c
          inet addr:10.0.1.25  Bcast:10.0.1.255  Mask:255.255.255.0
          inet6 addr: fe80::216:41ff:fe3b:9a2c/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:55628 errors:0 dropped:0 overruns:0 frame:0
          TX packets:17733 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:6467849 (6.1 MiB)  TX bytes:2424223 (2.3 MiB)
          Interrupt:16 Memory:d0080000-d00a0000

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:1014 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1014 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:60880 (59.4 KiB)  TX bytes:60880 (59.4 KiB)

Monday, September 1, 2014

esxi ipmi_si_drv

Waiting for loading ipmi_si_drv - disable IPMI on boot?

This question has been Answered.
juergensmd (Lurker)

Hi,
I am using a Supermicro X8SIL (without -F), a Xeon L3426, 16 GiB ECC reg., ESXi 5.5!
This board does not support IPMI, and I think there is no way to install an add-on card.
The system boots fast until ipmi_si_drv loads; after a 10 min. wait, the system finishes booting and works without any problems.
In the vSphere client the IPMI status is green! That's funny?
Is there any chance to minimize the wait time of the IPMI driver while booting?
Is there a modified ISO?
Is it possible to disable the IPMI driver? I do not need this feature.

Best regards,
Matthias
Correct Answer by MKguy (Master) on Oct 23, 2013 1:11 AM
You can uninstall the ipmi_si_drv VIB as follows:
First a test-run to make sure only this VIB is affected:
# esxcli software vib remove --dry-run --vibname ipmi-ipmi-si-drv
Removal Result
   Message: Dryrun only, host not changed. The following installers will be applied: [BootBankInstaller]
   Reboot Required: true
   VIBs Installed:
   VIBs Removed: VMware_bootbank_ipmi-ipmi-si-drv_39.1-4vmw.510.1.12.1065491
   VIBs Skipped:

Now for the actual removal:
# esxcli software vib remove --vibname ipmi-ipmi-si-drv

Then reboot the host.
You might have to put the host into maintenance mode before removing the VIB as well.
  • Helpful Answer 2. Re: Waiting for loading ipmi_si_drv - disable IPMI on boot?
    juergensmd (Lurker)
    Thank you very much,
    I will try this tonight.

    For putting in maintenance mode:
    Alt + F1, root access
    # vim-cmd hostsvc/maintenance_mode_enter
    then I will follow with your explanation above.

    That would be nice, I spent a lot of time last night....

    Best regards,
    Matthias


  • 3. Re: Waiting for loading ipmi_si_drv - disable IPMI on boot?
    juergensmd (Lurker)
    Hi,
    I have to say: Thank you very, very much!!!
    One minute, and all was done! Reboot was quick, ESXi 5.5 was ready in three minutes and working fine!

    Thanks a lot!

    Best regards,
    Matthias
  • 4. Re: Waiting for loading ipmi_si_drv - disable IPMI on boot?
    abrehmc (Lurker)
    You can add the ASUS KCMA-D8 motherboard to that list.
    I just upgraded to 5.5, and it still hangs (although it was slated to be fixed in 5.5).
    It WILL continue after 20 minutes or so, but it is still unacceptable to remove the entire Intelligent Platform Management Interface to fix this.
    That module should time out quicker and auto disable the functions with a warning.

    This problem has been here as long as I can remember - maybe it's time to get it fixed?
  • 5. Re: Waiting for loading ipmi_si_drv - disable IPMI on boot?
    joshuatownsend (Enthusiast, vExpert)
    Add my home lab Dell C6100 (XS23-TY3) with L5639 to the list with ESXi 5.5 - I waited 24 hours and the boot process remained stuck at loading ipmi_si_drv...  Reinstall and remove the IPMI VIB....  bummer.
  • 6. Re: Waiting for loading ipmi_si_drv - disable IPMI on boot?
    samueltowle (Lurker)
    Joshua,

    I just encountered the same thing on our C6100.
    Node 3 had a mauve screen of death, then would not boot past ipmi_si_drv.
    I tried setting the IPMI to "Shared" in the BIOS on our C6100 and it booted normally.

    HTH

    Sam
  • 7. Re: Waiting for loading ipmi_si_drv - disable IPMI on boot?
    codycook (Lurker)
    Ran into about the same thing, but with a Supermicro X8SIE and 32 GB RAM. We also experience the same long delay on ipmi_si_drv, which was not present in 5.0 or 5.1.  We attempted to remove ipmi_si_drv from the install but did not like it. We do not have IPMI options in the BIOS menu, so it is impossible to toggle anything related to IPMI. Try pressing Shift + O and appending noipmiEnabled to the boot args. Once booted, connect with the vSphere client and uncheck VMkernel.Boot.ipmiEnabled.
  • 8. Re: Waiting for loading ipmi_si_drv - disable IPMI on boot?
    Midus (Novice, vExpert)
    My experience was different. It hung if set to "shared", setting it back to "dedicated" solved the issue for me.

    I'm on BIOS 1.71, ESM 1.33 and FCB 1.20v2

    Message was edited by: Kenneth Chan (midus)

    Update 17th April 2014:

    Hmmm... on ESXi build 1623387, if "Set BMC NIC" is set to "dedicated" it will hang. Set it to "Shared" as recommended. BTW, I did try disabling the IPMI .vib (unchecking, i.e. setting VMkernel.Boot.ipmiEnabled to false). However, after doing so, all extended hardware monitoring (fan speeds, temperatures, etc.) is no longer visible.