Optimizing Internet Connection Using Internet Cyclone - Increase Internet Speed in Windows

Internet Cyclone is a powerful Internet tool for Windows 9x, NT, 2000, XP, 2003 and 7, created to modify your Windows registry settings in order to speed up your Internet connection by up to 200%.
After you upgrade your Internet connection, you can continue to use the software, because it is fully compatible with all hardware.

Internet Cyclone Main Features:
Speeds up your Internet connection by up to 200% in just a few seconds.

Compatible with all modems and with high-speed LAN, ISDN, cable, DSL, T1 and other Internet connections.

Speeds up video streaming on sites such as YouTube, Metacafe and Google.

Speeds up web surfing, online gaming, e-mailing and more.




OpenLDAP Everywhere Reloaded, Part I

May 23, 2012 By Stewart Walters, in HOW-TOs

Directory services are one of the most interesting and crucial parts of computing today. They provide our account management, basic authentication, address books and a back-end repository for the configuration of many other important applications.

It's been nine long years since Craig Swanson and Matt Lung originally wrote their article "OpenLDAP Everywhere" (LJ, December 2002), and almost six years since their follow-up article "OpenLDAP Everywhere Revisited" (LJ, July 2005).

In this multipart series, I cover how to engineer an OpenLDAP directory service to create a unified login for heterogeneous environments. With current software and a modern approach to server design, the aim is to reduce the number of single points of failure for the directory.

In this article, I describe how to configure two Linux servers to host core network services required for clients to query the directory service. I configure these core services to be highly available through the use of failover pools and/or replication.
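As a taste of what redundancy buys you, once both directory servers are up, a client can list several server URIs and fall back automatically. A minimal sketch using OpenLDAP's ldapsearch, with hypothetical hostnames and base DN (ldapsearch tries each URI in the -H list until one answers):

# query either directory server; hostnames and base DN are examples
ldapsearch -x -b "dc=example,dc=com" \
  -H "ldap://ldap1.example.com ldap://ldap2.example.com" \
  "(uid=someuser)"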




Hack and / - Password Cracking with GPUs, Part II: Get Cracking

May 29, 2012 By Kyle Rankin, in HOW-TOs, Security. Your hardware is ready. Now, let's load up some software and get cracking.

In Part I of this series, I explained how password cracking works in general terms and described my specific password-cracking hardware. In this article, I dig into the software side of things and describe how to put that hardware to use cracking passwords. I also discuss the two main types of attacks: dictionary and brute-force attacks. As I describe each attack, I also give specific examples of how I used the software to attack phpass, the hashing algorithm currently used for PHP-based software like WordPress.

For the purposes of this article, I created a sample WordPress blog on my server and created a few custom accounts—some with weak passwords and others with truly random passwords. Then, I went into the database for the site and pulled out the phpass password hashes for each account and put them into a file that looked like this:

$P$BpgwVqlfEwuaj.FlM7.YCZ6GQMu15D/
$P$BGMZP8qAHPjTTiTMdSxGhjfQMvkm2D1
$P$BOPzST0vwsR86QfIsQdspt4M5wUGVh.
$P$BjAZ1S3pmcGOC8Op808lOK4l25Q3Ph0
$P$BPlIiO5xdHmThnjjSyJ1jBICfPkpay1
$P$BReStde51ZwKHVtiTgTJpB2zzmGJW91

The above hashes are legitimate phpass hashes created from six-character passwords. I could tell you the passwords, but that would defeat the fun of cracking them yourself.

Proprietary Video Drivers

For those of you who, like me, believe in open-source software, this next section may be a bit disappointing. To get hardware-accelerated password-cracking software working on your system, you need to install the proprietary video drivers from either AMD or NVIDIA. That said, if you already have been using your system for Bitcoin mining, you already have the drivers and libraries you need, so you can skip to the next section about Hashcat. Honestly, you also could just follow the Bitcoin mining HOWTOs for Linux, and that would describe how to get all the drivers and libraries you need.

Many modern desktops make it relatively easy to pull down and install the proprietary video drivers. For instance, an Ubuntu desktop will prompt you that restricted drivers are available to install both for AMD and NVIDIA cards. Most other popular distributions provide good documentation on how to pull down the proprietary drivers as well. In the worst case, you may have to download the software directly from AMD or NVIDIA and install it that way—both have clear instructions and software available for Linux just like for other OSes.
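On an Ubuntu system of that era, for example, installing the restricted drivers came down to something like the following; treat the package names as illustrative, since they vary by release and card:

# AMD/ATI proprietary driver
sudo apt-get install fglrx
# NVIDIA proprietary driver
sudo apt-get install nvidia-current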

Once you have the proprietary drivers installed, you also need the AMD APP SDK for its OpenCL libraries or the NVIDIA CUDA libraries, depending on who made your video card. You likely will need to get these libraries directly from the AMD or NVIDIA Web sites. The install is straightforward though. In my case, I downloaded the AMD-APP-SDK-v2.5-lnx32.tgz file from AMD, extracted it, and ran the provided Install-AMD-APP.sh shell script as root.
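For the AMD case just described, the whole procedure is only a few commands. This sketch assumes the 32-bit v2.5 SDK filename mentioned above; adjust it for whatever version you download, and note that the extracted directory name may differ:

tar xzf AMD-APP-SDK-v2.5-lnx32.tgz
cd AMD-APP-SDK-v2.5-lnx32
# run the bundled installer as root to register the OpenCL libraries
sudo sh Install-AMD-APP.sh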

Hashcat

Many different password-cracking suites exist both for CPU- and GPU-based cracking. After reviewing all the options, I decided on the Hashcat family of cracking tools available at http://hashcat.net. On the site, you will see that a number of different tools are available. At first glance, it can be a bit confusing, as you can choose from hashcat, oclHashcat, oclHashcat-plus, oclHashcat-lite and even software called maskprocessor. Each program has its purpose though, depending on what you intend to do.

hashcat:

CPU-based, so slower than the GPU-based software.

Supports the widest range of hashing algorithms.

oclHashcat:

GPU-based password cracker.

Supports a moderate number of hashing algorithms.

Built-in support for dictionary, brute-force and mask attacks.

oclHashcat-plus:

GPU-based.

Supports the most hashing algorithms of the GPU-based hashcat crackers.

Optimized for dictionary attacks against multiple hashes.

Can support dictionary input from a pipe, so brute-force is possible.

oclHashcat-lite:

GPU-based.

Optimized for attacks against a single password hash.

Fastest of the hashcat family, but with the most-limited password hash support.

maskprocessor:

Generates dictionaries based on patterns you supply.

Not a password cracker in its own right, but can pipe output to oclHashcat-plus for a brute-force attack.

Even with the above lists, it may not always be clear which software to use. Basically, it comes down to what type of password you want to crack and what kind of attack you want to use. The page on hashcat.net devoted to each piece of software provides a list of the hashing algorithms they support along with benchmark speeds of how many comparisons they can do per second on different types of hardware. For a given password hash, go through those pages and see which type of Hashcat software supports your hash and has the highest benchmarks. Beyond that, use oclHashcat for mask or brute-force attacks against multiple hashes, oclHashcat-lite for single hashes or oclHashcat-plus if, as was the case with me, it's the only GPU-accelerated version that supported your hash.

Once you decide which type of Hashcat software to use, installation is relatively simple, if old-school. Just download the .7z package that corresponds to the software, and use the 7za command-line tool (which should be packaged for your distribution) to extract it. The software will extract into its own directory that provides 32- and 64-bit versions for both Linux and Windows. If you have NVIDIA hardware, you will use the binaries that begin with cuda; otherwise, you will use the versions that start with ocl. The directory also will contain a number of example hashes and dictionaries and example shell scripts you can use to make sure your libraries and drivers are in place. For instance, here's the example provided with the oclHashcat-plus software for cracking a phpass hash on a 64-bit system:

cat example.dict
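To show the shape of a full session, something like the following would extract the software and launch a dictionary attack against the phpass hashes from earlier. The version number and hash filename are placeholders, and -m 400 selects phpass in oclHashcat-plus's mode list:

# extract the archive (version number is hypothetical)
7za x oclHashcat-plus-0.08.7z
cd oclHashcat-plus-0.08
# dictionary attack: -m 400 = phpass; hashes.txt holds the pulled hashes
./oclHashcat-plus64.bin -m 400 hashes.txt example.dict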


Calculating Day of the Week

May 30, 2012 By Dave Taylor, in HOW-TOs, programming

For those of you playing along at home, you'll recall that our intrepid hero is working on a shell script that can tell you the most recent year that a specific date occurred on a specified day of the week—for example, the most recent year when Christmas occurred on a Thursday.

There are, as usual, nuances and edge cases that make this calculation a bit tricky, including the need to recognize when the specified date has already passed during the current year, because if it's July and we're searching for the most recent May 1st that was on a Sunday, we'd miss 2011 if we just started in the previous year.

In fact, as any software developer knows, the core logic of your program is often quite easy to assemble. It's all those darn corner cases—those odd, improbable situations that the program needs to recognize and respond to properly—that make programming a detail-oriented challenge. It can be fun, but then again, it can be exhausting and take weeks of debugging to ensure excellent coverage.

That's where we are with this script too. On months where the first day of the month is a Sunday, we're already set. Give me a numeric date, and I can tell you very quickly what day of the week it is. Unfortunately, that's only 1/7th of the possible month configurations.
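In that easy case, the math is a one-liner. A minimal shell sketch, assuming the 1st falls on a Sunday and numbering the days of the week 1 (Sunday) through 7 (Saturday):

# day of week for a month whose 1st is a Sunday
dom=16                        # day of month
dow=$(( (dom - 1) % 7 + 1 ))  # 16 -> 2, a Monday
echo $dow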

What DOW Is That DOM?

For purposes of this discussion, let's introduce two acronyms: DOM is Day Of Month, and DOW is Day Of Week. May 3, 2011, has DOM


Seamlessly Extending IRC to Mobile Devices

Jun 06, 2012 By Bill Childers, in HOW-TOs

Internet Relay Chat (IRC) is one of the older real-time communications methods still in active use on the Internet. Due to its popularity, flexibility and cross-platform nature, it still has a very vibrant user base today. Personally, I've been on IRC since the late 1990s, and it's been very useful and lots of fun—particularly the #linuxjournal room on Freenode—stop in sometime!

Drawbacks to IRC

As great as IRC can be, there's one thing about it that's always bothered me, and it's something that Jabber got right—the concept of a resource and priority. In IRC, when you log in as whatever user you are, you can't log in as that user again from another machine. Jabber allows multiple user logins, so long as the "resource" is different, and it'll route your messages to the client with the lowest priority. IRC doesn't even have that concept, so once you log in on your desktop, you have to log in as another user if you want to log in on your laptop, which results in lots of logins like "WildBill


Rock Out with Your Console Out

Jun 11, 2012 By Rebecca Chapnik, in Audio/Video, HOW-TOs, music. Playing and managing your music in text mode.

Some of you probably have played audio files from the terminal with one-line commands, such as play, or even used the command line to open a playlist in a graphical music player. Command-line integration is one of the many advantages of using Linux software. This is an introduction for those who want the complete listening experience—browsing, managing and playing music—without leaving the text console.

Thanks to the Ncurses (New Curses) widget library, developers can design text user interfaces (TUIs) to run in any terminal emulator. An Ncurses application interface is interactive and, depending on the application, can capture events from keystrokes as well as mouse movements and clicks. It looks and works much like a graphical user interface, except it's all ASCII—or perhaps ANSI, depending on your terminal. If you've used GNU Midnight Commander, Lynx or Mutt, you're already familiar with the splendors of Ncurses.

An intuitive interface, whether textual or graphical, is especially important in a media player. No one wants to sift through a long man page or resort to Ctrl-c just to stop an annoying song from playing on repeat, and most users (I'm sure some exceptions exist among Linux Journal readers) don't want to type out a series of commands just to ls the songs in an album's directory, decide which one they want to hear and play it, and then play a song in a different directory. If you've ever played music with a purely command-line application, such as SoX, you know what I'm talking about. Sure, a single command that plays a file is quite handy; this article, however, focuses on TUI rather than CLI applications. For many text-mode programs, Ncurses is the window (no pun intended) to usability.

Note to developers: if you want to write a console music player, take advantage of the Curses Development Kit (CDK), which includes several ready-made widgets, such as scrolling marquees and built-in file browsing.

Now, on to the music players!

Mp3blaster

Mp3blaster was the first console music player I ever used. That was in 2007, by which time it already was a mature and full-featured application. Its history actually dates back to 1997, before the mainstream really had embraced the MP3 format, let alone the idea of an attractive interface for controlling command-line music playback. Back then, it was humbly known as "Mp3player".

Despite the name, Mp3blaster supports several formats besides MP3s. Currently, these include OGG, WAV and SID. Keep an eye out for FLAC support in the future, as it is on the to-do list in the latest source tarball.

One nice feature of Mp3blaster is the top panel showing important keyboard shortcuts for playlist management. You can scroll through this list using


Ahead of the Pack: the Pacemaker High-Availability Stack

Jun 18, 2012 By Florian Haas, in HOW-TOs, SysAdmin. A high-availability stack serves one purpose: through a redundant setup of two or more nodes, ensure service availability and recover services automatically in case of a problem. Florian Haas explores Pacemaker, the state-of-the-art high-availability stack on Linux.

Hardware and software are error-prone. Eventually, a hardware issue or software bug will affect any application. And yet, we're increasingly expecting services—the applications that run on top of our infrastructure—to be up 24/7 by default. And if we're not expecting that, our bosses and our customers are. What makes this possible is a high-availability stack: it automatically recovers applications and services in the face of software and hardware issues, and it ensures service availability and uptime. The definitive open-source high-availability stack for the Linux platform builds upon the Pacemaker cluster resource manager. And to ensure maximum service availability, that stack consists of four layers: storage, cluster communications, resource management and applications.

Cluster Storage

The storage layer is where we keep our data. Individual cluster nodes access this data in a joint and coordinated fashion. There are two fundamental types of cluster storage:

Single-instance storage is perhaps the more conventional form of cluster storage. The cluster stores all its data in one centralized instance, typically a volume on a SAN. Access to this data is either from one node at a time (active/passive) or from multiple nodes simultaneously (active/active). The latter option normally requires the use of a shared-cluster filesystem, such as GFS2 or OCFS2. To prevent uncoordinated access to data—a sure-fire way of shredding it—all single-instance storage cluster architectures require the use of fencing. Single-instance storage is very easy to set up, specifically if you already have a SAN at your disposal, but it has a very significant downside: if, for any reason, data becomes inaccessible or is even destroyed, all server redundancy in your high-availability architecture comes to naught. With no data to serve, a server becomes just a piece of iron with little use.

Replicated storage solves this problem. In this architecture, there are two or more replicas of the cluster data set, with each cluster node having access to its own copy of the data. An underlying replication facility then guarantees that the copies are exactly identical at the block layer. This effectively makes replicated storage a drop-in replacement for single-instance storage, albeit with added redundancy at the data level. Now you can lose entire nodes—with their data—and still have more nodes to fail over to. Several proprietary (hardware-based) solutions exist for this purpose, but the canonical way of achieving replicated block storage on Linux is the Distributed Replicated Block Device (DRBD). Storage replication also may happen at the filesystem level, with GlusterFS and Ceph being the most prominent implementations at this time.

Cluster Communications

The cluster communications layer serves three primary purposes: it provides reliable message passing between cluster nodes, establishes the cluster membership and determines quorum. The default cluster communications layer in the Linux HA stack is Corosync, which evolved out of the earlier, now all but defunct, OpenAIS Project.

Corosync implements the Totem single-ring ordering and membership protocol, a well-studied message-passing algorithm with almost 20 years of research among its credentials. It provides a secure, reliable means of message passing that guarantees in-order delivery of messages to cluster nodes. Corosync normally transmits cluster messages over Ethernet links by UDP multicast, but it also can use unicast or broadcast messaging, and even direct RDMA over InfiniBand links. It also supports redundant rings, meaning clusters can use two physically independent paths to communicate and transparently fail over from one ring to another.

Corosync also establishes the cluster membership by mutually authenticating nodes, optionally using a simple pre-shared key authentication and encryption scheme. Finally, Corosync establishes quorum—it detects whether sufficiently many nodes have joined the cluster to be operational.

Cluster Resource Management

In high availability, a resource can be something as simple as an IP address that "floats" between cluster nodes, or something as complex as a database instance with a very intricate configuration. Put simply, a resource is anything that the cluster starts, stops, monitors, recovers or moves around. Cluster resource management is what performs these tasks for us—in an automated, transparent, highly configurable way. The canonical cluster resource manager in high-availability Linux is Pacemaker.

Pacemaker is a spin-off of Heartbeat, the high-availability stack formerly driven primarily by Novell (which then owned SUSE) and IBM. It re-invented itself as an independent and much more community-driven project in 2008, with developers from Red Hat, SUSE and NTT now being the most active contributors.

Pacemaker provides a distributed Cluster Information Base (CIB) in which it records the configuration and status of all cluster resources. The CIB replicates automatically to all cluster nodes from the Designated Coordinator (DC)—one node that Pacemaker automatically elects from all available cluster nodes.

The CIB uses an XML-based configuration format, which in releases prior to Pacemaker 1.0 was the only way to configure the cluster—something that rightfully made potential users run away screaming. Since these humble beginnings, however, Pacemaker has grown a tremendously useful, hierarchical, self-documenting text-based shell, somewhat akin to the "virsh" subshell that many readers will be familiar with from libvirt. This shell—unimaginatively called "crm" by its developers—hides all that nasty XML from users and makes the cluster much simpler and easier to configure.

In Pacemaker, the shell allows us to configure cluster resources—no surprise there—and operations (things the cluster does with resources). In addition, we can set per-node and cluster-wide attributes, send nodes into a standby mode where they are temporarily ineligible for running resources, manipulate resource placement in the cluster, and do a plethora of other things to manage our cluster.
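As an illustration of how approachable the crm shell is, here is roughly what configuring a floating IP address resource looks like; the resource name and address are placeholders, not taken from the article:

# define a floating IP resource and have the cluster monitor it
crm configure primitive p_ip ocf:heartbeat:IPaddr2 \
  params ip=192.168.0.100 cidr_netmask=24 \
  op monitor interval=30s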

Finally, Pacemaker's Policy Engine (PE) recurrently checks the cluster configuration against the cluster status and initiates actions as required. The PE would, for example, kick off a recurring monitor operation on a resource (such as, "check whether this database is still alive"); evaluate its status ("hey, it's not!"); take into account other items in the cluster configuration ("don't attempt to recover this specific resource in place if it fails more than three times in 24 hours"); and initiate a follow-up action ("move this database to a different node"). All these steps are entirely automatic and require no human intervention, ensuring quick resource recovery and maximum uptime.

At the cluster resource management level, Pacemaker uses an abstract model where resources all support predefined, generic operations (such as start, stop or check the status) and produce standardized return codes. To translate these abstract operations into something that is actually meaningful to an application, we need resource agents.

Resource Agents

Resource agents are small pieces of code that allow Pacemaker to interact with an application and manage it as a cluster resource. Resource agents can be written in any language, with the vast majority being simple shell scripts. At the time of this writing, more than 70 individual resource agents ship with the high-availability stack proper. Users can, however, easily drop in custom resource agents—a key design principle in the Pacemaker stack is to make resource management easily accessible to third parties.

Resource agents translate Pacemaker's generic actions into operations meaningful for a specific resource type. For something as simple as a virtual "floating" IP address, starting up the resource amounts to assigning that address to a network interface. More complex resource types, such as those managing database instances, come with much more intricate startup operations. The same applies to varying implementations of resource shutdown, monitoring and migration: all these operations can range from simple to complex, depending on resource type.

Highly Available KVM: a Simple Pacemaker Cluster

This reference configuration consists of a three-node cluster with single-instance iSCSI storage. Such a configuration is easily capable of supporting more than 20 highly available virtual machine instances, although for the sake of simplicity, the configuration shown here includes only three. You can complete this configuration on any recent Linux distribution—the Corosync/Pacemaker stack is universally available on CentOS 6,


Hack and / - Password Cracking with GPUs, Part III: Tune Your Attack

Jul 09, 2012 By Kyle Rankin, in HOW-TOs, Security. You've built the hardware, installed the software and cracked some passwords. Now find out how to fine-tune your attacks.

In the first two parts of this series, I explained what hardware to get and then described how to use the hashcat software suite to perform dictionary and brute-force attacks. If you have been following along, by this point, you should have had plenty of time to build your own password-cracking hardware and experiment with oclhashcat. As I mentioned in my last column, password cracking is a pretty dense subject. In this article, I finish the series by describing how to tune and refine your attacks further so they can be more effective.

Use More GPU Cycles

The first area where you can fine-tune your attacks is to put more or less load on your GPU. The -n option, when passed to oclhashcat, changes how much of your GPU will be used for an attack. The documentation says that this value is set to 80 by default; however, on my computer, it seemed like the default was set closer to 40. When I first ran a brute-force attack, the output told me I was using around 70–80% of my GPU. Once I added -n 80 to my oclhashcat command, I noticed I was using between 96–98% of my GPU and had added an extra 40,000 comparisons per second:

/path/to/mp32.bin -1 ?d?l?u ?1?1?1?1?1?1
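For reference, that maskprocessor command generates every six-character candidate built from digits and upper- and lowercase letters. A sketch of the full pipeline into oclHashcat-plus against the phpass hashes from Part II might look like this; the binary names and hash file are placeholders:

# feed generated candidates straight into the cracker at 80% GPU load
/path/to/mp32.bin -1 ?d?l?u ?1?1?1?1?1?1 | \
  ./oclHashcat-plus64.bin -n 80 -m 400 hashes.txt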


Calculating Day of the Week, Finally

Jul 25, 2012 By Dave Taylor, in HOW-TOs, programming

As with many of the challenges we tackle, the latest project has sprawled across more articles than I ever expected when I first received the query from a reader. The question seems reasonably simple: given a month, day number and day of the week, calculate the most recent year that matches those criteria.

There are some obscure and complex formulas for doing just this, but instead, I decided it'd be interesting basically to loop backward from the current year for the month in question, parsing and analyzing the output of the handy cal program.

The real challenge has been that the cal program never really was designed to produce easily parsed output, so figuring out the day of the week (DOW, as we've been abbreviating it) involves basically counting the number of leading spaces or otherwise compensating for an average month where the first day starts mid-week, not neatly on Sunday.

An algorithm-friendly version of cal would optionally output the days prior to the first day of the month as zeros or underscores, making this oodles easier. But it doesn't, so we have to compensate.
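To make the compensation concrete, here is one rough way (not necessarily the author's exact code) to derive the day of the week of the 1st from cal's leading spaces, exploiting the fact that each weekday column is three characters wide:

# DOW of the 1st of a month: 0=Sunday .. 6=Saturday
dow_of_first() {
  cal "$1" "$2" | sed -n 3p | \
    awk '{ print int((match($0, /[0-9]/) - 1) / 3) }'
}
dow_of_first 5 2011    # prints 0: May 1, 2011, was a Sunday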

Figuring the Day of the Week

Last time, we wrapped up with a shell function that expected the day, month and year as arguments and returned the day of the week of that particular date in that month on that year. In other words, 16 May, 2011, occurs on a Monday:

     May 2011
Su Mo Tu We Th Fr Sa
 1  2  3  4  5  6  7
 8  9 10 11 12 13 14
15 16 17 18 19 20 21
22 23 24 25 26 27 28
29 30 31

The actual return value of the function in this instance is 2, so 1


Android Programming with App Inventor

Aug 15, 2012 By Amit Saha, in android, HOW-TOs, Mobile. Drag and drop your way to Android programming.

MIT App Inventor, re-released as a beta service (as of March 5, 2012) by the MIT Center for Mobile Learning after taking over the project from Google, is a visual programming language for developing applications for the Android mobile computing platform. It is based on the concept of blocks, and applications are designed by fitting together blocks of code snippets. This may sound like a very childish way of programming, especially for seasoned readers of Linux Journal. But then again, App Inventor will tickle the child programmer in you and make you chuckle at the ease with which you can develop applications for your Android device. In this article, I describe how to use the camera on the Android device, develop e-mail and text-messaging-based applications and also show how to use location sensors to retrieve your current geographical location. Let's get started.

Getting Started

App Inventor has minimum setup requirements and is completely browser-based. You need a working Java installation on your system, as it uses Java Web Start for its functioning. Point your browser to http://appinventor.mit.edu, and once you sign in with your Google account, you should see a screen as shown in Figure 1. This is called the Projects Page, where you can see your existing projects and create new ones.




Interfacing Disparate Systems

Sep 04, 2012 By James Litton, in HOW-TOs

When hearing the word interface, most people probably think of a Graphical User Interface or a physical hardware interface (serial, USB). If you dabble in scripting or are a serious developer, you, no doubt, are familiar with the concept of software interfaces as well. Occasionally, the need arises to integrate disparate systems where an interface doesn't already exist, but with a little ingenuity, an interface can be created to bridge the disparity and help you meet your specific needs.

I have an extensive home automation implementation I developed over the years. As I knocked out the "easy" integrations, I eventually came to a point of wanting to integrate systems that are not home automation-friendly. An example of this is my alarm system. Excellent alarm panels exist on the market that make integration a cinch, but I already had a fully functional alarm system and was determined to integrate it into my home automation setup rather than replace it.

My first inclination was to hack a keypad or build my own hardware interface that would allow me to capture status information. Both of those approaches are viable, but as I thought about other options, I realized I could integrate my proprietary alarm system into my home automation system without even cracking open the alarm panel.

Before I reveal the details of how I achieved the outcome I wanted, let me first lay out my integration goals. Although it would be nice to capture sensor data from the alarm system, in my case, it was totally unnecessary, as the only data that might be helpful was motion sensor data or specific zone faults. Because I already have numerous motion sensors installed that are native to my home automation build, and because fault data wasn't a factor in my immediate integration requirements, I concluded that I needed to know only if my alarm was "armed" or "unarmed". Knowing the state of the alarm system helps me make my home automation system smarter. An example of this added intelligence might be to change the thermostat setting and turn off all lights if the alarm state changes to armed. Another example might be to turn on all of the lights in the house when the garage door opens after dark and the alarm is armed.

As I thought through the scenarios a bit further, I quickly realized I needed a bit more data. Depending on how an alarm system is installed and the practices of its users, there may or may not be multiple armed states that need to be considered. In my case, I have two separate armed states. One state is "armed away" (nobody home), and the other is "armed stay" (people are in the house). It wouldn't make sense to turn off all of the lights in the house, for example, if the system was set to armed stay, but that would make perfect sense if it were set to armed away. As I continued to think through my needs, I concluded that knowing whether the system was armed away, armed stay or unarmed was all I needed to add significantly greater intelligence to my home automation scenes.

Once I had a firm grasp of my needs, I realized my alarm-monitoring company already was providing me with some of the data I was looking for in the form of e-mail messages. Every time the alarm was armed or disarmed, I would get an e-mail message indicating the state change. I had been using this feature for a while, as it was helpful to know when my kids arrived home or when they left for school in the morning. Because I had been using this notification mechanism for some time, I also knew it to be extremely timely and reliable.

Because I was getting most of the data I needed, I started thinking about ways I might be able to leverage my e-mail system as the basis for an interface to my proprietary alarm panel. In days gone by, I had used procmail to process incoming e-mail, so I knew it would be fairly easy to inject a script into the inbound mail-processing process to scan content and take action.

Before I started down the path of writing a script and figuring out how to make my e-mail system run inbound mail through it, I needed to deal with the shortcoming I had with status notifications. You may have noticed that I said my alarm-monitoring company was sending me two status notifications: one for armed and one for unarmed. I was fairly certain that an additional relay could be configured so the folks at the company could notify me with the two variations of "armed" that I needed to proceed, so I called them to discuss the matter, and sure enough, they were able to make the change I requested. In fairly short order, I was receiving the three notifications that I wanted.

With the notifications in place, I could start the task of creating a script to scan incoming mail.

To keep things as simple as possible, I decided to write the script in Bash.

To follow this example, the first thing you need to do is capture all of the data being piped into the script and save it for processing:

#!/bin/bash
while read a
do echo "$a" >>
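The excerpt cuts off mid-script, but the overall shape of the approach is easy to sketch. Assuming the captured message lands in a temporary file, and assuming some command (here the imaginary placeholder ha_set_state) tells the home automation system about state changes, the scanning logic might look like this; the notification phrases are also hypothetical:

#!/bin/bash
# save the inbound message the mail system pipes to us
msg=$(mktemp)
while read a
do echo "$a" >> "$msg"
done
# look for the three notification phrases and report the state
if grep -qi "armed away" "$msg"; then
  ha_set_state armed_away
elif grep -qi "armed stay" "$msg"; then
  ha_set_state armed_stay
elif grep -qi "disarm" "$msg"; then
  ha_set_state unarmed
fi
rm -f "$msg"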


Making Lists in Scribus

Sep 10, 2012 By Bruce Byfield, in HOW-TOs, Scribus, Scribus software

You might as well know from the start: Making bulleted or numbered lists in Scribus isn't as easy as in the average word processor. In fact, compared to LibreOffice, Scribus as installed is downright primitive in the way it handles lists. You can pull a script off the Internet to automate to an extent, but chances are you'll have to tweak it before it does exactly what you want.

Creating a list manually is straightforward, but tedious. You want numbers? Insert them one at a time, and repeat whenever you need to renumber. You want bullet points? Go to Insert -> Character -> Bullet, or, if you want a character other than a plain bullet, to Insert -> Glyph -> Hide/Show Enhanced Palette, and copy and paste as necessary. Then, in both cases press Tab and start typing.

In both cases, you should create a list paragraph style to control the indentations in the list. Go to Edit -> Styles -> New -> Paragraph Style -> Tabulators and Indentation, and adjust the First Line Indent and Left Indent fields, using both positive and negative numbers, until you have the spacing you want. Usually, you want a negative indent for the first line, so that the number or bullet is indented from the margin, and a positive number for the left indent. The trickiest part is adjusting the left indent so that the subsequent lines align with the first -- a matter of trial and error that varies with the font.




An Introduction to GCC Compiler Intrinsics in Vector Processing

Sep 21, 2012 By George Koharchik and Kathy Jones, in HOW-TOs, Math

Speed is essential in multimedia, graphics and signal processing. Sometimes programmers resort to assembly language to get every last bit of speed out of their machines. GCC offers an intermediate between assembly and standard C that can get you more speed and processor features without having to go all the way to assembly language: compiler intrinsics. This article discusses GCC's compiler intrinsics, emphasizing vector processing on three platforms: X86 (using MMX, SSE and SSE2); Motorola, now Freescale (using Altivec); and ARM Cortex-A (using Neon). We conclude with some debugging tips and references.

Download the sample code for this article here: http://www.linuxjournal.com/files/linuxjournal.com/code/11108.tar

So, What Are Compiler Intrinsics?

Compiler intrinsics (sometimes called "builtins") are like the library functions you're used to, except they're built in to the compiler. They may be faster than regular library functions (the compiler knows more about them so it can optimize better) or handle a smaller input range than the library functions. Intrinsics also expose processor-specific functionality so you can use them as an intermediate between standard C and assembly language. This gives you the ability to get to assembly-like functionality, but still let the compiler handle details like type checking, register allocation, instruction scheduling and call stack maintenance. Some builtins are portable, others are not--they are processor-specific. You can find the lists of the portable and target-specific intrinsics in the GCC info pages and the include files (more about that below). This article focuses on the intrinsics useful for vector processing.
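Using the intrinsics requires telling GCC which vector extension to target. The exact flags depend on your compiler version and target, so treat these invocations as illustrative:

gcc -O2 -mmmx -msse -msse2 -o demo demo.c   # x86: MMX/SSE/SSE2
gcc -O2 -maltivec -o demo demo.c            # PowerPC: AltiVec
gcc -O2 -mfpu=neon -o demo demo.c           # ARM Cortex-A: Neon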

Vectors and Scalars

In this article, a vector is an ordered collection of numbers, like an array. If all the elements of a vector are measures of the same thing, it's said to be a uniform vector. Non-uniform vectors have elements that represent different things, and their elements have to be processed differently. In software, vectors have their own types and operations. A scalar is a single value, a vector of size one. Code that uses vector types and operations is said to be vector code. Code that uses only scalar types and operations is said to be scalar code.

Vector Processing Concepts

Vector processing is in the category of Single Instruction, Multiple Data (SIMD). In SIMD, the same operation happens to all the data (the values in the vector) at the same time. Each value in the vector is computed independently. Vector operations include logic and math. Math within a single vector is called horizontal math. Math between two vectors is called vertical math. For example, a vertical add of two four-element vectors computes (a0+b0, a1+b1, a2+b2, a3+b3) in a single operation, while a horizontal add sums the elements a0+a1+a2+a3 of one vector.

Instead of writing: 10 x 2


The Sysadmin's Toolbox: iftop

Sep 25, 2012 By Kyle Rankin, in HOW-TOs, SysAdmin. Who's using up all the bandwidth, and what are they doing? Use iftop to find out.

Longtime system administrators often take tools for granted that they've used for years and assume everyone else has heard of them. Of course, new sysadmins join the field every day, and even seasoned sysadmins don't all use the same tools. With that in mind, I decided to write a few columns where I highlight some common-but-easy-to-overlook tools that make life as a sysadmin (and really, any Linux user) easier. My last article covered sar, a tool you can use to collect and view system metrics over time. This time, I discuss a program that's handy for viewing real-time network performance data: iftop.

Anyone who's had to use a network at a conference has experienced what happens when there just isn't enough network bandwidth to go around. While you are trying to check your e-mail, other people are streaming movies and TV shows, downloading distribution install disks, using p2p networks, upgrading their distributions or watching cat videos on YouTube. Although it's certainly frustrating to try to use one of those networks, imagine how frustrating it would be to be the admin in charge of that network. Whether you run a conference network, a local office network or even a Web server at your house, it can be really nice to know what is using up all of your bandwidth.

iftop is a Linux command-line program designed to give you live statistics about what network connections use the most bandwidth in a nice graphical form. As you may realize from the name, iftop borrows a lot of ideas from the always-useful load troubleshooting tool top. Like top, iftop updates automatically every few seconds, and like top, by default, it sorts the output you see by what's using the most resources. Where top is concerned with processes and how much CPU and RAM they use, iftop is concerned with network connections and how much upload and download bandwidth they use.

Even though iftop is packaged for both Red Hat- and Debian-based distributions, it's probably not installed by default, so you will need to install the package of the same name. In the case of Red Hat-based distributions, you might have to pull it down from a third-party repository. Once it's installed, the simplest way to get started is just to run iftop as the root user. iftop will locate the first interface it can use and start listening in on the traffic and display output similar to what you see in Figure 1. To close the program, press q to quit just like with top.
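A few command-line options make iftop more useful in practice. These are standard iftop flags; the interface name and network below are examples:

iftop -i eth0            # monitor a specific interface
iftop -n                 # skip DNS lookups for a faster display
iftop -F 192.168.1.0/24  # only count traffic to/from this network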




PirateBox

The PirateBox is a device designed to facilitate sharing. There's one catch: it isn't connected to the Internet, so you need to be close enough to connect via Wi-Fi to this portable file server. This article outlines the project and shows how to build your own.

In days of yore (the early- to mid-1990s), those of us using the "Internet", as it was, delighted in our ability to communicate with others and share things: images, MIDI files, games and so on. These days, although file sharing still exists, that feeling of community has been leeched away from the same activities, and people are somewhat skeptical of sharing files on-line anymore for fear of a lawsuit or who's watching.

Enter David Darts, the Chair of the Art Department at NYU. Darts, aware of the Dead Drops (http://deaddrops.com) movement, was looking for a way for his students to be able to share files easily in the classroom. Finding nothing on the market, he designed the first iteration of the PirateBox.

"Protecting our privacy and our anonymity is closely related to the preservation ofour freedoms."—David Darts

Dead Drops

Dead Drops is an off-line peer-to-peer file-sharing network in public. In other words, it is a system of USB Flash drives embedded in walls, curbs and buildings. Observant passersby will notice the drop and, hopefully, connect a device to it. They then are encouraged to drop or collect any files they want on this drive. For more information, comments and a map of all Dead Drops worldwide, go to http://deaddrops.com.

What Does David Darts Keep on His PirateBox?

A collection of stories by Cory Doctorow.

Abbie Hoffman's Steal This Book.

DJ Danger Mouse's The Grey Album.

Girl Talk's Feed the Animals.

A collection of songs by Jonathan Coulton.

Some animations by Nina Paley.

(All freely available and released under some sort of copyleft protection.)

The PirateBox is a self-contained file-sharing device that is designed to be simple to build and use. At the same time, Darts wanted something that would be private and anonymous.

The PirateBox doesn't connect to the Internet for this reason. It is simply a local file-sharing device, so the only thing you can do when connected to it is chat with other people connected to the box or share files. This creates an interesting social dynamic, because you are forced to interact (directly or indirectly) with the people connected to the PirateBox.

The PirateBox doesn't log any information. "The PirateBox has no tool to track or identify users. If ill-intentioned people—or the police—came here and seized my box, they will never know who used it", explains Darts. This means the only information stored about any users by the PirateBox is any actual files uploaded by them.

The prototype of the PirateBox was a plug computer, a wireless router and a battery fit snugly into a metal lunchbox. Since the design was released on the Internet, the current iteration of the PirateBox (and the one used by Darts himself) is built onto a Buffalo AirStation wireless router (although it's possible to install it on anything running OpenWRT), bringing the components down to only the router and a battery. One branch of the project is working on porting it to the Android OS, and another is working on building a PirateBox using only open-source components.

How to Build a PirateBox

There are several tutorials on the PirateBox Web site (http://wiki.daviddarts.com/PirateBox_DIY) on how to set up a PirateBox based on what platform you are planning on using. The simplest (and recommended) way of setting it up is on an OpenWRT router. For the purpose of this article, I assume this is the route you are taking. The site suggests using a TP-Link MR3020 or a TP-Link TL-WR703N, but it should work on any router with OpenWRT installed that also has a USB port. You also need a USB Flash drive and a USB battery (should you want to be fully mobile).

Adding USB Support to OpenWRT

USB support can be added by running the following commands:

opkg update
opkg install kmod-usb-uhci
insmod usbcore
insmod uhci
opkg install kmod-usb-ohci
insmod usb-ohci

Assuming you have gone through the initial OpenWRT installation (I don't go into this process in this article), you need to make some configuration changes to allow your router Internet access initially (the software will ensure that this is locked down later).



Extreme Graphics with Extrema

Nov 12, 2012 By Joey Bernard, in Graphics, HOW-TOs, Reviews

High-energy physics experiments tend to generate huge amounts of data. While this data is passed through analysis software, very often the first thing you may want to do is to graph it and see what it actually looks like. To this end, a powerful graphing and plotting program is an absolute must. One available package is called Extrema (http://exsitewebware.com/extrema/index.html). Extrema evolved from an earlier software package named Physica. Physica was developed at the TRIUMF high-energy centre in British Columbia, Canada. It has both a complete graphical interface for interactive use in data analysis and a command language that allows you to process larger data sets or repetitive tasks in a batch fashion.

Installing Extrema typically is simply a matter of using your distribution's package manager. If you want the source, it is available at the SourceForge site (http://sourceforge.net/projects/extrema). At SourceForge, there also is a Windows version, in case you are stuck using such an operating system.

Once it is installed on your Linux box, launching it is as simple as typing in extrema and pressing Enter. At start up, you should see two windows: a visualization window and an analysis window (Figure 1). One of the most important buttons is the help button. In the analysis window, you can bring it up by clicking on the question mark (Figure 2). In the help window, you can get more detailed information on all the functions and operators available in Extrema.




Public Art with Augmented Reality and Blender

Augmented reality artist/developer Nathan Shafer plans to illustrate the history of Exit Glacier in Seward, Alaska, via 3D modeling using the popular open-source modeling software Blender. The finished result will allow young scientists, school children and other visitors to use mobile devices equipped with Shafer’s app to view reconstructions of the five former termini that were present before the significant, visible shrinkage that illustrates the larger issue of glacial recession.

In Shafer’s words:

I make digital pieces and upload them into the real world. I am modeling in Blender, because it has proven to be the most dynamic and efficient 3D program for what I do.  In AR the name of the game is efficiency.  5000 polygons is about the max for any model in a browser.  I am building some models that are literally 3 miles long, so I have to condense.  I am using geodata provided by Kenai Fjords National Park to generate NURBS that will approximate the height of the glacier, sort of give me a box to work in.  The look of the glacier is being rendered using a mix of plug-ins and lightning effects, which oddly enough never look the way you want them to when they translate into an AR browser and get the real world all around them.  The model is being skinned with actual photos of the glacier using the node-based compositor and textures, after that is done, we want a scanline rendering that runs an algorithm calculating the actual sun over the virtual model (hopefully in real time, which is very hard).

There is going to be some Python scripting for interactive features on the models, but I am playing with the notion of using more of Blender’s gaming features, which I have never used before on this. In the finished mobile AR app, dates will be displayed floating in situ, in virtual mobile space, when the dates are touched, the terminus from that era will appear.   Most other scripting will be done in Javascript, and AR usually requires JSON response on the back end, which we are using a Linux server to manage.

Shafer is using Kickstarter, the popular fundraising site, to fund his project, and you can read more about it there.


______________________

Katherine Druckman is webmistress at LinuxJournal.com. You might find her chatting on the IRC channel or on Twitter.


OSI Announces It Will Open the Organization to Individual Members

Wednesday, July 17, at the O'Reilly Open Source Conference in Portland, Oregon, the Open Source Initiative (OSI) announced a new initiative to open up the organization to individual members. Historically, the organization was open only to affiliate members, so this announcement marks a significant new direction for the open-source advocate. The shift represents a move from a governance model of volunteer and self-appointed directors to one driven by members.

The OSI's high-level objectives in making this change are to provide a broad meeting place for everyone who shares an interest in open-source software, with the continuing aim of strengthening the OSI so that it can fulfill its goals more effectively. These goals include safely maintaining the Open Source Definition, managing the approval of open-source licenses and publicly supporting the widespread adoption and use of open-source software.

The OSI believes that having a large global membership base is an excellent way to achieve those goals. It also believes that its individual members will be able to advocate for open source in their communities and organizations. Combining individual members with OSI Affiliate organizations, the OSI hopes this new focus will help make the OSI the strongest voice for open source around the world.

The new individual membership level is for people who want to support the mission of the OSI, which is to educate about and advocate for the benefits of open source and to build bridges among different constituencies in the Open Source community. Anyone can join the OSI for $40; however, because the OSI is based in the US, it cannot accept contributions from certain countries where economic embargoes are in place. If you are in such a country, please contact the OSI, as it is willing to provide a complimentary membership. The OSI believes it is at a transformative stage in its history. In addition to supporting the OSI financially, individual members can help define the ways the OSI achieves its mission.

Here is the link to the join page: http://www.opensource.org/join.


Texas Linux Fest is This Week - Win a Free Pass

Texas Linux Fest begins this Friday, August 3rd, and there's still plenty of time to register. Or, you can enter to win one of five free passes. You have until 3pm tomorrow, July 31, to enter, so hurry! We'll post the winners tomorrow afternoon, so you'll still have time to register if you don't win.

We hope you'll join us in San Antonio, and drop by the Linux Journal table on Saturday. This is shaping up to be a fun event! 

From the official schedule announcement:

 

With the 32 sessions now set and the exhibitors ready to go, the schedule for Texas Linux Fest 2012 has been finalized, and it highlights a wide variety of speakers and topics -- as well as a wide range of exhibitors -- for the San Antonio event Aug. 3-4 at the Norris Conference Center.

 

A full schedule can be found at http://2012.texaslinuxfest.org/program and registration, lodging discounts and other pertinent information can be found at the bottom of this e-mail.

 

Friday’s schedule includes a Chef 101 session, where Opscode instructors will present free training followed by an afternoon hackathon. Zenoss also will be holding a session on providing the who, what, where and how of the Zenoss Open Source monitoring solution. The BSD Certification Group will offer the BSDA certification exam on Friday afternoon to attendees of Texas Linux Fest.

 

Saturday’s schedule kicks off with the Texas Linux Fest 2012 keynote presentation,


The Open Source Office Software Sector Heats Up

The world of LibreOffice and OpenOffice(.org) has been heating up recently with several exciting and, at times, bewildering developments. The Document Foundation remains very active, as does LibreOffice development, but Oracle has given up on OpenOffice and slapped LibreOffice in the face by giving it to Apache. Perhaps the most important announcement was the release of LibreOffice 3.4.0.

The recent release of LibreOffice 3.4 demonstrates the philosophical differences between community projects and those stifled by commercial interests. LibreOffice development has been happening at an unprecedented pace, while OpenOffice lagged behind and lost many of its previous users. Even under Sun, development was tightly controlled, but Oracle increased the bonds. In contrast, according to the release announcement, LibreOffice now has 120 happy developers committing approximately 20 changes per day. Cédric Bosdonnat puts the number of contributors at 223. Italo Vignoli is quoted as saying, "We care for our developers, and it shows."

Just before LibreOffice 3.4 was released, Oracle announced that it was donating OpenOffice to the Apache Software Foundation. Pundits have speculated all around the spectrum on how that will affect the office suite, with some thinking it will certainly benefit, while others think it will most likely wane even further. The Document Foundation expressed disappointment that a reunification of the two projects will probably not occur, but offered its best wishes for OpenOffice. It was upbeat about the possibility of including OpenOffice code, since the Apache license is compatible with the GNU Lesser General Public License under which LibreOffice is released. Given these facts, "the event is neutral for The Document Foundation."

What's New in LibreOffice 3.4?

Most folks just want to hear of the pretty and handy features visible in their daily work, but underestimating the impact of code clean-up is a disservice to developers. These code clean-ups are what lead to faster operation and fewer crashes. Michael Meeks calls this "ridding ourself of sillies." One area in which these two worlds merge comes in an example given by Meeks: icons. He said, "OO.o had simply tons of duplication, of icons everywhere" - approximately 150 duplicated or missing icons. He added, "All of that nonsense is now gone." A font memory leak has been fixed, and rarely used encodings have been moved out into their own library. This "reduces run-time memory consumption" and shrinks download size.




Bitcoin - I Hardly Knew Ya

I first heard of Bitcoin when the Free Software Foundation announced it would start accepting the currency for donations. Before long, another story about Bitcoin appeared in my news feeds. Then another. And another. Then the new currency got a black eye, and finally, the Electronic Frontier Foundation stopped accepting donations of it. You know something is on very shaky ground when a non-profit will no longer accept donations of it.

Bitcoin began life just two short years ago as what some may characterize as an experiment in a new currency—one that wasn't tied to any national currency and that rose or fell in value somewhat like a stock on an exchange. Perhaps the best advantage of using Bitcoin currency was the ability to conduct purchases anonymously.

That latter point was driven home in an article on Gawker.com highlighting how many Bitcoin users were using their anonymous currency to purchase illegal drugs and other outlawed commodities, such as incandescent light bulbs. That eye-opening article led to an increase in the exchange rate of Bitcoin units, but it also brought the currency to the attention of some of the same lawmakers who outlawed the incandescent light bulbs. With legitimate concerns such as illegal drug and prostitution activity, these senators and key justice department figures are seeking to shut down the Silk Road, the anonymous site used to sell and purchase illegal goods, and to investigate how to castrate Bitcoin. Bitcoin is used for numerous legal transactions as well, much like BitTorrent, such as donations to non-profits or paying for IT services. Some think of it as an investment. This chart shows the rise and fall of Bitcoin exchange rates coinciding with the Gawker.com story and subsequent hack and theft of Bitcoin user account data.

On June 19, hackers broke in to Mt. Gox, where most of the Bitcoin activity originated, and stole its database of accounts. When the news broke, many Bitcoin holders dumped their "coins", which, combined with the flood of stolen coins, sent the exchange rate back down to almost nothing. Mt. Gox immediately locked its system down. Rollbacks are being implemented, and accounts are being restored. Mark Karpeles, a Mt. Gox spokesman, posted that exchange rates should be back to approximately $17.50 when everything is back to normal.

Not just because of the hack into Mt. Gox but for several reasons, the Electronic Frontier Foundation decided to no longer accept Bitcoin donations to help fund its civil-liberties work. The other issues cited stem from the legalities of using the currency and of trying to convert Bitcoins into real cash. The EFF also did not want to be construed as endorsing Bitcoin or Mt. Gox.

As of this writing, the Free Software Foundation is still accepting Bitcoin donations. Nevertheless, the future of Bitcoin is very uncertain. Even if folks can figure out where they might use this obscure currency, can get over their fear of volatility, and discover how to purchase or earn some Bitcoinage; those in power will surely find a way to destroy anything they don't understand or can't control and tax.

______________________

Susan Linton is a Linux writer and the owner of tuxmachines.org.


Mageia 2 Release Details Revealed

After an extensive discussion with the community on the Mageia development mailing list, Anne Nicolas revealed the results concerning Mageia release and support cycles, as well as the release schedule for Mageia 2. The consensus was to use basically the same cycle used for Mageia 1.

Three proposals were given for discussion:

Proposal 1:
6-month release cycle -> 12-month life cycle
( Fedora, Ubuntu, Mandriva < 2010.1 && Mandriva !



LibreOffice Developer Glimpse Proves Balance

Florian Effenberger recently posted statistics on the number of developers contributing to the LibreOffice project. Several months ago, Cédric Bosdonnat offered data on the number of contributions and contributors from the various sources. While Effenberger's post provides much less detail, it still provides a glimpse into the composition of the growing community.

According to commit counts, Oracle, everybody's favorite bad guy these days, has the highest developer count at 54, accounting for a full 18% of all commits. As Italo Vignoli explained, "Oracle contributions are related to the OOo code that has been merged with LibreOffice, and in fact the number of commits has decreased dramatically during the last few months. There are, though, some former Oracle developers contributing on a volunteer basis to LibreOffice."

SUSE is next, with 20 employees making contributions, giving them 6.7% of commits. Known contributors follow with 3% from 9 contributors. Known contributors are those with a history of developing for OpenOffice.org and LibreOffice but not working on behalf of or representing any known employer.


