perl-AI-PSO-0.86-alt1.noarch.rpm



Description

perl-AI-PSO - Module for running the Particle Swarm Optimization algorithm

Distribution: ALT Linux Sisyphus
Repository: Autoimports noarch
Package name: perl-AI-PSO
Package version: 0.86
Package release: alt1
Package architecture: noarch
Package type: rpm
Installed size: 47.01 KB
Download size: 47.01 KB
Official Mirror: ftp.altlinux.org
Particle Swarm Optimization is an optimization algorithm designed by Russell Eberhart and James Kennedy from Purdue University. The algorithm is based on the emergent behavior of societal groups, ranging from the marching of ants, to the flocking of birds, to the swarming of bees. PSO is a cooperative approach to optimization rather than an evolutionary approach, which kills off unsuccessful members of the search team. In the swarm framework, each particle is a relatively unintelligent search agent; it is in the collective sharing of knowledge that solutions are found. Each particle simply shares its information with its neighboring particles. So, if one particle is not doing too well (has a low fitness), it looks to its neighbors for help and tries to become more like them while still maintaining a sense of individuality.

A particle is defined by its position and velocity. The parameters a user wants to optimize define the dimensionality of the problem hyperspace. So, if you want to optimize three variables, a particle will be three-dimensional and will have 3 values that define its position and 3 values that define its velocity. The position of a particle determines how good it is according to a user-defined fitness function. The velocity of a particle determines how quickly it changes location. Larger velocities provide more coverage of the hyperspace at the cost of solution precision: with large velocities, a particle may come close to a maximum but overshoot it because it is moving too quickly. With smaller velocities, particles can really hone in on a local solution and find the best position, but they may miss another, possibly even more optimal, solution because a full search of the hyperspace was not conducted. Techniques such as simulated annealing can be applied so that the closer a particle gets to a solution, the smaller its velocity becomes: in bad areas of the hyperspace the particles move quickly, but in good areas they spend some extra time looking around.

In general, particles fly around the problem hyperspace looking for local/global maxima. At each position, a particle computes its fitness. If it does not meet the exit criteria, it gets information from neighboring particles about how well they are doing. If a neighboring particle is doing better, the current particle tries to move closer to its neighbor by adjusting its position. As mentioned, the velocity controls how quickly a particle changes location in the problem hyperspace. There are also some stochastic weights involved in the positional updates so that each particle is truly independent and can take its own search path while still incorporating good information from other particles (a sketch of this update step appears after this overview).

In this particular Perl module, the user is able to choose between two implementations of the algorithm. One is the original implementation from *Swarm Intelligence*, which requires the definition of a 'random range' to which the two stochastic weights must sum. The other implementation allows the user to define the weighting of how much a particle follows its own path versus following its peers. In both cases there is an element of randomness. Solution convergence is quite fast once one particle comes close to a local maximum. Having more particles active means there is more of a chance that you will not be stuck in a local maximum. Oftentimes different neighborhoods (when not configured in a global-neighborhood fashion) will converge to different maxima.
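The positional update described above can be written as a short sketch. The code below is illustrative only and is not taken from the module's internals; the particle structure, field names, and weight values are hypothetical, but the update rule itself is the standard PSO step in which each dimension of the velocity is pulled toward the particle's own best-known position and its best neighbor's position by two random weights.

#!/usr/bin/perl
use strict;
use warnings;

# Illustrative sketch of a PSO update step (not the module's exact
# internals): each dimension of a particle's velocity is nudged toward
# its own best position ("me") and its best neighbor's position
# ("them"), each scaled by a stochastic weight, and the position is
# then advanced by the new velocity.
sub update_particle {
    my ($particle, $neighbor_best, $me_weight, $them_weight) = @_;

    for my $d ( 0 .. $#{ $particle->{position} } ) {
        my $r1 = rand();    # stochastic pull toward own best
        my $r2 = rand();    # stochastic pull toward neighbor's best

        $particle->{velocity}[$d] =
              $particle->{velocity}[$d]
            + $me_weight   * $r1 * ( $particle->{best}[$d] - $particle->{position}[$d] )
            + $them_weight * $r2 * ( $neighbor_best->[$d]  - $particle->{position}[$d] );

        $particle->{position}[$d] += $particle->{velocity}[$d];
    }
}

# Example: a 3-dimensional particle pulled toward a better neighbor.
my %particle = (
    position => [ 0.0, 0.0, 0.0 ],
    velocity => [ 0.1, 0.1, 0.1 ],
    best     => [ 0.2, 0.1, 0.3 ],
);
update_particle( \%particle, [ 1.0, 1.0, 1.0 ], 2.0, 2.0 );
print "new position: @{ $particle{position} }\n";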
It is quite interesting to watch graphically. If the fitness function is expensive to compute, then it is often useful to start out with a small number of particles first and get a feel for how the algorithm converges. The algorithm implemented in this module is taken from the book *Swarm Intelligence* by Russell Eberhart and James Kennedy. I highly suggest you read the book if you are interested in this sort of thing.
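A minimal usage sketch is shown below, loosely following the synopsis in the module's POD (perldoc AI::PSO). The parameter values and the fitness function here are illustrative assumptions, not recommended settings; the calls pso_set_params, pso_register_fitness_function, pso_optimize, and pso_get_solution_array are the documented entry points. The fitness function should return larger values for better positions, and the search stops once a particle reaches exitFitness or maxIterations is hit.

#!/usr/bin/perl
use strict;
use warnings;
use AI::PSO;

# Parameter values below are illustrative; see the module's POD for the
# full list of options and their meanings.
my %params = (
    numParticles  => 4,      # size of the swarm
    numNeighbors  => 3,      # neighborhood size for information sharing
    maxIterations => 1000,   # hard stop if exitFitness is never reached
    dimensions    => 3,      # number of variables being optimized
    deltaMin      => -4.0,   # minimum change in a particle's position
    deltaMax      => 4.0,    # maximum change in a particle's position
    meWeight      => 2.0,    # pull toward a particle's own best position
    meMin         => 0.0,
    meMax         => 1.0,
    themWeight    => 2.0,    # pull toward the best neighbor's position
    themMin       => 0.0,
    themMax       => 1.0,
    exitFitness   => 0.99,   # stop once a particle reaches this fitness
    verbose       => 0,
);

# Hypothetical fitness function: approaches 1 as all three variables
# approach zero, so the swarm should converge near the origin.
sub my_fitness {
    my @pos = @_;
    my $sum = 0;
    $sum += $_ * $_ for @pos;
    return 1 / ( 1 + $sum );
}

pso_set_params( \%params );
pso_register_fitness_function('my_fitness');
pso_optimize();

my @solution = pso_get_solution_array();
print "best position found: @solution\n";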

Alternatives

Requires

Provides

  • perl(AI/PSO.pm) = 0.860
  • perl-AI-PSO = 0.86-alt1

    Download

    Install Howto

    1. Add the following line to /etc/apt/sources.list:
      rpm [Sisyphus] http://ftp.altlinux.org/pub/distributions/ALTLinux/autoimports/Sisyphus noarch autoimports
      
    2. Update the package index:
      # sudo apt-get update
    3. Install perl-AI-PSO rpm package:
      # sudo apt-get install perl-AI-PSO

    Files

    • /usr/share/doc/perl-AI-PSO-0.86/Changes
    • /usr/share/doc/perl-AI-PSO-0.86/README
    • /usr/share/doc/perl-AI-PSO-0.86/examples/
    • /usr/share/doc/perl-AI-PSO-0.86/examples/NeuralNet/Makefile
    • /usr/share/doc/perl-AI-PSO-0.86/examples/NeuralNet/NeuralNet.cpp
    • /usr/share/doc/perl-AI-PSO-0.86/examples/NeuralNet/NeuralNet.h
    • /usr/share/doc/perl-AI-PSO-0.86/examples/NeuralNet/main.cpp
    • /usr/share/doc/perl-AI-PSO-0.86/examples/NeuralNet/pso.dat
    • /usr/share/doc/perl-AI-PSO-0.86/examples/NeuralNet/pso_ann.pl
    • /usr/share/perl5/AI/PSO.pm