MouseTrap Dev Help

= The MouseTrap Program =
 
 
'''Here is a complete dissection of the hierarchy from the gnome-mousetrap source archive:''' http://gnome-mousetrap.sourcearchive.com/documentation/0.3plus-psvn17-2ubuntu1/main.html
 
 
==Definitions==
 
; Haar wavelet: First proposed by Alfred Haar, the Haar wavelet is a sequence of rescaled square-shaped functions that together form a wavelet family, i.e. an orthonormal function basis. For more information, see: [[http://en.wikipedia.org/wiki/Haar_wavelet]]
 
; Haar-like features: Computed from adjacent rectangular regions in a specified detection window by summing the pixel intensities in each region and taking the difference between those sums. They are called Haar-like features because they use coefficients similar to those of the Haar wavelet transform. These features can be combined with ''boosted classifiers'' into a ''classifier cascade'' that is matched against positive samples to form a model for object detection. For more information, see [[http://en.wikipedia.org/wiki/Haar-like_features]] [[http://opencv.willowgarage.com/wiki/FaceDetection]]
 
; Boosted Classifiers: Classifiers whose accuracy has been increased by boosting, i.e. by combining many weak classifiers trained against positive object samples at multiple scales.
 
; Boosting: Attempts to produce new classifiers that are better able to predict examples for which the current ensemble's performance is poor. [[http://www.cs.cmu.edu/afs/cs/project/jair/pub/volume11/opitz99a-html/node4.html]]
 
; Classifier Cascade: Proposed by Paul Viola and refined by Rainer Lienhart, a classifier cascade is a sequence of progressively more complex boosted classifiers applied to a region of interest; a region is rejected as soon as any stage fails, so most non-matching regions are discarded cheaply and only likely matches reach the later stages.
 
; ROI (Region of Interest): This is usually a subset of the original frame represented as a rectangle. It is most often compared to a classifier cascade to determine a positive match.
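
To tie these terms together, here is a minimal sketch (not taken from the MouseTrap sources) of running a Haar classifier cascade over an image with OpenCV's cv2 bindings; the cascade and image paths are placeholder assumptions.

 # Minimal sketch: detect regions of interest with a Haar classifier cascade.
 # Assumptions: cv2 is installed; the XML path points to one of the cascades
 # shipped with OpenCV; "sample.jpg" is a placeholder input image.
 import cv2
 
 cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
 image = cv2.imread("sample.jpg")
 gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
 
 # Each match is a region of interest (x, y, width, height) accepted by the
 # cascade of boosted classifiers at some scale.
 regions = cascade.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=3)
 for (x, y, w, h) in regions:
     cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)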
 
; Ocvfw (OpenCV FrameWork): MouseTrap's in-house framework that wraps OpenCV methods and includes functions to initiate the camera and detect Haar-like features.
 
; Optical flow: The pattern of apparent motion of objects between consecutive frames. [[http://www.youtube.com/watch?v=WS3naWwfToI]] [[http://robots.stanford.edu/cs223b05/notes/CS%20223-B%20T1%20stavens_opencv_optical_flow.pdf]]
 
; Lucas-Kanade method: A differential method for estimating optical flow; it combines information from several nearby pixels to resolve the ambiguity of the optical flow equation. [[http://en.wikipedia.org/wiki/Lucas%E2%80%93Kanade_method]]
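
As a rough illustration (not from the MouseTrap code), the cv2 bindings expose a pyramidal Lucas-Kanade implementation; the frame file names below are placeholders.

 # Sketch of pyramidal Lucas-Kanade optical flow between two grayscale frames.
 # Assumptions: cv2 is installed; frame0.png/frame1.png are placeholder
 # consecutive frames from the camera.
 import cv2
 
 prev_gray = cv2.imread("frame0.png", 0)
 next_gray = cv2.imread("frame1.png", 0)
 
 # Pick corners worth tracking in the first frame, then estimate where each
 # one moved to in the second frame.
 points = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50, qualityLevel=0.01, minDistance=10)
 new_points, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, points, None,
                                                    winSize=(15, 15), maxLevel=2)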
 
; Singleton: A pattern that restricts the instantiation of a class to one object so that program-wide actions can be coordinated through that single instance.
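
A generic Python illustration of the pattern (MouseTrap's own singleton handling may differ):

 # Generic singleton sketch: repeated instantiation yields the same object.
 class Singleton(object):
     _instance = None
 
     def __new__(cls, *args, **kwargs):
         if cls._instance is None:
             cls._instance = super(Singleton, cls).__new__(cls)
         return cls._instance
 
 a = Singleton()
 b = Singleton()
 assert a is b    # both names refer to the one shared instance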
 
 
== Classes ==
 
 
=== app ===
 
==== main.py ====
 
* Loads the Image Detection Module (idm)
 
 
==== commons.py ====
 
* Stores global variables for Ocvfw
 
  * cv: OpenCV related variables
 
  * hg: OpenCV.highgui related variables. Set in ocvfw/idm/ color.py to be used in the creation of the Window and Trackbar
 
  * abs_path: the absolute path to the commons file
 
  * haar_cds: array with the haar xml file paths
 
  * colors: array with the types of colors the image can appear in
 
  * singleton: class instance to be passed through program globally
 
 
* '''Methods:'''
 
  get_ch(color): returns channel corresponding to the color given (rgb, bgr, gray)
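
Below is a hypothetical sketch of what get_ch() might look like, based only on the description above and assuming it maps a color mode name to a channel count; the real MouseTrap implementation may differ.

 # Hypothetical sketch of commons.get_ch(); assumes it maps a color mode
 # name to the number of channels that mode uses.
 def get_ch(color):
     channels = {"rgb": 3, "bgr": 3, "gray": 1}
     return channels[color.lower()]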
 
 
==== environment.py ====
 
 
==== debug.py ====
 
* This is used to display messages to the command line
 
* Very helpful to troubleshoot
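
As an illustration only (not the actual debug.py), this is the kind of helper such a module typically wraps, using Python's standard logging module:

 # Illustrative only: command-line debug messages via the standard logging module.
 import logging
 
 logging.basicConfig(level=logging.DEBUG,
                     format="mousetrap %(levelname)s: %(message)s")
 log = logging.getLogger("mousetrap")
 log.debug("camera backend loaded")    # appears on the command line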
 
 
=== ui ===
 
 
==== main.py ====
 
 
=== lib ===
 
 
=== addons ===
 
 
== OpenCV Framework (ocvfw) ==
 
* This is the wrapper around OpenCV
 
* View diagram here: [[https://github.com/amberheilman/mousetrap/blob/master/docs/MouseTrap.jpeg]]
 
 
 
=== ocvfw/ _ocv.py ===
 
* Contains three classes:
 
 
  OcvfwBase: direct copy of backends/ OcvfwBase
 
  OcvfwPython: direct copy of backends/ OcvfwPython
 
  OcvfwCtypes: direct copy of backends/ OcvfwCtypes
 
 
=== ocvfw/ idm ===
 
* Detects features based on an xml file
 
 
=== ctypesopencv ===
 
* This is used instead of the python bindings for OpenCV
 
 
=== pocv.py ===
 
* Returns an instance of the idm
 
 
=== ocvfw/ haars ===
 
* Haar cascade XML files, generated from thousands of training samples, that are used to detect features
 
 
===dev/ camera===
 
* checks for gtk
 
* loads the camera backend
 
* sets Camera as a singleton with backend as the base
 
* Class Capture
 
** first gets a region of interest and then matches it against a haar classifier
 
** Sets all variables associated with video capture
 
**'''Methods:'''
 
  set_async(fps, async): sets the frames per second and whether the image should have asynchronous querying.
 
                        If it is true, then it will set a gobject timeout of the specified frames per second
 
  sync(): Synchronizes the Capture image with the Camera image
 
  set_camera(key, value): sets the Camera object with key and value specified
 
  image(new_img): sets the self.__image variable to specified value
 
  resize(width, height, copy): resizes self.__image using cv.Resize() with the width

                              and height given; if copy is True, self.__image is not replaced.
 
  to_gtk_buff(): Converts image to gtkImage and returns it
 
  points(): returns self.__graphics["point"], a list of the points that have been added
 
  rectangles(): returns self.__graphics["rect"], a list with rectangles that have been added
 
  show_rectangles(rectangles): draws the rectangles onto the self.__image
 
  original(): returns the Capture object with the self.__image_orig image, setting the Capture to the original image
 
  rect(*args): uses the args (a rectangle) to get a sub-part of the self.__image using cv.GetSubRect()
 
  flip(flip): flip is a string that can contain 'hor' 'ver' or 'both' to use cv.Flip() to manipulate the
 
              self.__image. Returns self.__image
 
  color(new_color, channel, copy): if new_color is true it will set the image to the new channel provided.
 
                                  If copy is set, it will only manipulate a new image and keep the
 
                                  existing image as is.
 
  change(size, color, flip): will set self.__color_set to the new color value and set self.__flip
 
                            to the new flip value. Does not currently support the change in image size.
 
  add(graphic): has checks to see if the capture is locked or if the graphic exists already. Otherwise
 
                it will add the graphic passed to it to the image using set_lkpoint()
 
  remove(label): removes a graphic object from self.__graphics[] by its label.
 
  get_area(haar_csd, roi, orig): uses the haartraining file (haar_csd) with get_haar_points() or

                                get_haar_roi_points(), depending on whether roi is set. It can also get

                                the area within an area by passing the roi and setting the origin point.
 
  message(message): does nothing, just pass
 
  lock(): sets the self.__lock to true, which is used in add() and remove()
 
  unlock(): sets self.__lock to false
 
  is_locked(): returns self.__lock
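
Pieced together from the method descriptions above, a hypothetical usage sketch of the Capture API might look like the following; the import path, argument values and cascade path are assumptions, not taken from the MouseTrap sources.

 # Hypothetical usage sketch of Capture, based on the method list above.
 # The import path, argument values and cascade path are assumptions.
 from ocvfw.dev.camera import Capture
 
 cap = Capture()
 cap.set_async(100, True)                 # poll the camera via a gobject timeout
 cap.sync()                               # refresh the Capture image from the Camera
 cap.flip("hor")                          # mirror the image horizontally
 area = cap.get_area("haarcascade_frontalface_alt.xml")   # placeholder cascade path
 cap.show_rectangles(area)                # draw the matched regions onto the image
 buff = cap.to_gtk_buff()                 # convert to a gtk image for the UI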
 
 
 
*'''Classes'''
 
  '''Graphic()'''
 
    init(): x and y stored in a list coords[x, y]
 
            size: list [width, height]
 
            type: could be a point
 
            label: string
 
            color: rgb color or tuple
 
            follow: used for optical flow
 
            parent: what is the parent class
 
    is_point(): checks if type is true
 
 
  '''Point(): contains a graphic and additional variables and methods'''
 
    init(): graphic(**args)
 
            __ocv: opencv attribute
 
            last: opencv attribute
 
            diff: difference between two points
 
            abs_diff: difference between original and current
 
            rel_diff: difference between last and current
 
            orig: an opencv Point object
 
    set_opencv(opencv): updates the current attributes, updates the points in abs_diff and rel_diff
 
                        and sets self.__ocv to opencv given
 
    opencv(): returns the graphic object with the opencv attributes (__ocv)
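
A hypothetical reconstruction of the Graphic structure from the attribute list above (the attribute names follow the list; the constructor signature, defaults and the is_point() check are assumptions):

 # Hypothetical reconstruction of Graphic based on the attribute list above;
 # the constructor signature and defaults are assumptions.
 class Graphic(object):
     def __init__(self, x=0, y=0, width=0, height=0, type="point",
                  label="", color=(255, 0, 0), follow=None, parent=None):
         self.coords = [x, y]          # x and y stored in a list
         self.size = [width, height]
         self.type = type              # could be a point
         self.label = label
         self.color = color            # rgb color tuple
         self.follow = follow          # used for optical flow
         self.parent = parent          # the parent class
 
     def is_point(self):
         # assumed check; the description only says it checks the type
         return self.type == "point"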
 
 
===backend===
 
* Link to an image of how the backend distributes variables: [[http://gnome-mousetrap.sourcearchive.com/documentation/0.4-2/classmousetrap_1_1ocvfw_1_1__ocv_1_1OcvfwBase__inherit__graph.png]]
 
 
* Contains three classes:
 
 
====backend/ OcvfwBase====
 
* sets image variables
 
* '''Methods:'''
 
  set(key, value): sets the key (image variables) to the value specified
 
  lk_swap(set): switches the boolean of the Lucas-Kanade points, if true, it will append current to last
 
  new_image(size, num, ch): Will CreateImage(size, depth, channel) using a Size(width, height), depth and channel
 
  set_camera_idx(idx): sets the global var self.idx to specified idx number
 
  wait_key(num): uses cv WaitKey() and inputs number specified
 
  start_camera(params): grabs the video capture and sets it as a global variable
 
  query_image: grabs the first frame and creates self.img, the pyramids and grey images for optical flow.
 
              Uses wait_key(). returns true
 
  set_lkpoint(point): uses cv.Point, sets the self.img_lkpoints image, uses dev/ camera.set_opencv()
 
                      to manipulate the graphic object (made by MouseTrap), sets the ["current"] using FindCornerSubPix()
 
                      and if ["last"] exists, it appends current, appends point to ["points"]
 
  clean_lkpoints(): sets self.img_lkpoints current, last and points to empty
 
  show_lkpoints(): calculates the optical flow and assigns it to the ["current"] lkpoints if it resolves.
 
                  Recursively goes through ["points"]  and draws them, then sets ["current"] back to points
 
  swap_lkpoints(): only after the new points were shown, swap prev with original and current with last
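
The set_lkpoint()/show_lkpoints() descriptions follow the classic OpenCV lkdemo pattern. Here is a rough self-contained sketch of that pattern in the old cv bindings (not MouseTrap's code; the frame file names and parameter values are placeholder assumptions):

 # Rough sketch of the corner-refinement + pyramidal LK pattern described
 # above, using the old cv bindings (not the MouseTrap source; the frame
 # file names and parameter values are placeholder assumptions).
 import cv2.cv as cv
 
 prev_grey = cv.LoadImage("frame0.png", cv.CV_LOAD_IMAGE_GRAYSCALE)
 grey = cv.LoadImage("frame1.png", cv.CV_LOAD_IMAGE_GRAYSCALE)
 
 # Scratch pyramid buffers used by the pyramidal LK implementation.
 prev_pyramid = cv.CreateImage(cv.GetSize(grey), 8, 1)
 pyramid = cv.CreateImage(cv.GetSize(grey), 8, 1)
 
 term_crit = (cv.CV_TERMCRIT_ITER | cv.CV_TERMCRIT_EPS, 20, 0.03)
 
 # Refine a hand-picked point to sub-pixel accuracy (cf. set_lkpoint()).
 points = cv.FindCornerSubPix(prev_grey, [(100, 100)], (20, 20), (-1, -1), term_crit)
 
 # Track the refined points into the current frame (cf. show_lkpoints()).
 new_points, status, errors = cv.CalcOpticalFlowPyrLK(
     prev_grey, grey, prev_pyramid, pyramid, points, (20, 20), 3, term_crit, 0)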
 
 
====backend/ OcvfwPython====
 
* inherits from OcvfwBase
 
* imports global and local variables from Commons.hg (highgui-related variables) and Commons.cv (OpenCV variables)
 
* has the ability to get_motion_points but is not used
 
* has the ability to add_message to the image shown but is not used
 
* '''Methods:'''
 
  get_haar_roi_points: finds regions of interest within the entire frame image and returns the matches against the classifier cascade
 
                      using the ''HaarDetectObjects()'' OpenCV function.
 
  get_haar_points: resizes the image by 1.5
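
A rough sketch of what a HaarDetectObjects() call on a region of interest looks like in the old cv bindings (illustrative only; MouseTrap's actual arguments, cascade paths and ROI handling may differ):

 # Illustrative only: run HaarDetectObjects() over a sub-rectangle (ROI) of a
 # frame, as get_haar_roi_points() is described as doing. Paths and parameter
 # values are placeholder assumptions.
 import cv2.cv as cv
 
 frame = cv.LoadImage("frame.png")
 cascade = cv.Load("haarcascade_frontalface_alt.xml")
 storage = cv.CreateMemStorage(0)
 
 # Restrict detection to a region of interest: a 200x200 box at (50, 50).
 roi = cv.GetSubRect(frame, (50, 50, 200, 200))
 
 # Arguments: image, cascade, storage, scale factor, min neighbors, flags, min size.
 # Each match is ((x, y, w, h), neighbours), relative to the ROI.
 matches = cv.HaarDetectObjects(roi, cascade, storage, 1.2, 2, 0, (20, 20))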
 
 
====backend/ OcvfwCtypes====
 
* imports global and local variables from Commons.hg (highgui), Commons.cv (the OpenCV common lib), and OcvfwBase
 
 
== Files affected by OpenCV 2.4.3 upgrade ==
 
* main.py
 
* ocvfw
 
** _ocv.py
 
** ocvfw/ idm
 
*** eyes.py
 
*** finger.py
 
*** forehead.py
 
*** color.py
 
** commons.py
 
** ocvfw/ dev
 
*** camera.py
 
** ocvfw/ backends
 
*** OcvfwBase.py
 
*** OcvfwCtypes.py
 
*** OcvfwPython.py
 
 
= OpenCV =
 
 
== General Concerns ==
 
Here we collect the concerns about migrating to a newer version of OpenCV and the repercussions of not upgrading.
 
 
=== Migration to Python 3 ===
 
* Opencv 2.4.3 does not support Python 3
 
 
=== Camera drivers ===
 
* Currently, only some camera drivers are supported by the new version of OpenCV, leading to capture problems. One issue is the inability to cancel the program's webcam capture.
 
 
=== Yum repo ===
 
* Right now, only the 2.3.1 version of opencv is available in the yum repository
 
 
=OpenCV Info=
 
 
===Basics and Source Code===
 
 
OpenCV (Open Source Computer Vision) is a library of programming functions for real-time computer vision. OpenCV is released under the liberal BSD license and is therefore free for both academic and commercial use. It has C++, C, Python and Java (Android) interfaces and supports Windows, Linux, Android and Mac OS. The library has more than 2500 optimized algorithms. Adopted all around the world, OpenCV has a user community of more than 47 thousand people and an estimated number of downloads exceeding 5 million. Usage ranges from interactive art to mine inspection, stitching maps on the web, and advanced robotics.
 
 
To access the OpenCV repository directly: git clone git://github.com/itseez/opencv.git
 
* A good source for this is: http://code.opencv.org/projects/opencv/wiki/Working_with_OpenCV_git_repository
 
* The rest of the history plus matches between git commits and SVN revisions are stored at a separate "OpenCV Attic" repository: git://code.opencv.org/opencv_attic.git.
 
* OpenCV Extra was also moved to a separate repository: git://code.opencv.org/opencv_extra.git.
 
 
===History===
 
"OpenCV was started at Intel in 1999 by Gary Bradski for the purposes of accelerating research in and commercial applications of computer vision in the world and, for Intel, creating a demand for ever more powerful computers by such applications. Vadim Pisarevsky joined Gary to manage Intel's Russian software OpenCV team. Over time the OpenCV team moved on to other companies and other Research. Several of the original team eventually ended up working in robotics and found their way to Willow Garage. In 2008, Willow Garage saw the need to rapidly advance robotic perception capabilities in an open way that leverages the entire research and commercial community and began actively supporting OpenCV, with Gary and Vadim once again leading the effort." NEED A REF OR LINK
 
 
----
 
So what is OpenCV?
 
* OpenCV is a computer vision library in C++.
 
* OpenCV 2 was released in 2009 with additional functionality and increased performance
 
* CV is a Python wrapper for OpenCV
 
* CV2 is a Python wrapper for OpenCV 2 and includes CV
 
** If you want cv functions, you have to import the cv submodule:

 import cv2.cv as cv
 
 
The full name is "Open Source Computer Vision Library." It is a library of programming functions aimed at real-time computer vision. It was developed by Intel, and it is now supported by Willow Garage and Itseez. It is free for use under the open source BSD license. It is also cross-platform.
 
 
 
 
----
 
Where did it come from?
 
 
OpenCV emerged from the Intel Research Initiative; it is related to Intel's Performance Library, which today is called Integrated Performance Primitives (IPP). The project launched in 1999, when Intel was looking for CPU-intensive applications. The project goals were to:

#advance vision research with an open and optimized infrastructure

#disseminate vision knowledge with readable code

#advance commercial applications
 
----
 
'''Version List:'''
 
* alpha release at CVPR 2000

* five beta releases, 2001-2005

* Version 1.0, 2006

* Continuation of development by Willow Garage, 2008 (pre-release version 1.1)

* Version 2.0, 2009

* Versions 2.1 and 2.2, 2010

* Version 2.3, 2011

* Version 2.4.0, May 2012

* Version 2.4.1, June 2012

* Version 2.4.2, July 2012
 
----
 
'''Application Use:'''
 
* 2D and 3D Feature Toolkits

* Egomotion Estimation

* ''Facial Recognition Systems''

* ''Gesture Recognition''

* ''Human-Computer Interaction (HCI)''

* Mobile Robotics

* Motion Analysis

* Object Detection and Recognition

* Segmentation

* Stereo Vision: Depth Perception from 2 Cameras

* Structure from Motion (SFM)

* ''Motion Tracking''
 
 
(http://www.cvl.isy.liu.se/education/graduate/opencv/Lecture1_History.pdf)
 
 
===Definitions===
 
====Modules Available====
 
*'''core:''' a compact module defining basic data structures, including the dense multi-dimensional array Mat, and basic functions used by all other modules

*'''imgproc:''' an image processing module that includes linear and non-linear image filtering, geometrical image transformations (resize, affine and perspective warping, generic table-based remapping), color space conversion, histograms, and so on

*'''video:''' a video analysis module that includes motion estimation, background subtraction, and object tracking algorithms

*'''calib3d:''' basic multiple-view geometry algorithms, single and stereo camera calibration, object pose estimation, stereo correspondence algorithms, and elements of 3D reconstruction
 
*'''features2d:''' salient feature detectors, descriptors, and descriptor matchers
 
*'''objdetect:''' detection of objects and instances of the predefined classes (for example, faces, eyes, mugs, people, cars, and so on)
 
*'''highgui:''' an easy-to-use interface to video capturing, image and video codecs, as well as simple UI capabilities
 
*'''gpu:''' GPU-accelerated algorithms from different OpenCV modules
 
*some other helper modules, such as FLANN and Google test wrappers, Python bindings, and others
 
 
(http://docs.opencv.org/)
 
 
=== OpenCV - Great Resources===
 
* http://www.cvl.isy.liu.se/education/graduate/opencv/Lecture1_History.pdf
 
* http://docs.opencv.org/
 
* http://docs.opencv.org/opencv_tutorials.pdf (Tutorial)
 
* http://opencv.org/documentation.html
 
* http://www.pages.drexel.edu/~nk752/tutorials.html (Tutorials)
 
* http://www.laganiere.name/opencvCookbook/ (Companion Site to an OpenCV2 Book)
 
* http://stackoverflow.com/questions/10417108/what-is-different-between-all-these-opencv-python-interfaces (Difference between CV and CV2)
 
* http://opencv-users.1802565.n2.nabble.com/ (OpenCV mailing list archive)
 
* http://pr.willowgarage.com/wiki/OpenCVMeetingNotes (OpenCV meeting minutes)
 
** As of May 22, 2012, the meeting notes moved to: http://code.opencv.org/projects/opencv/wiki/Meeting_notes
 
* http://opencv.willowgarage.com/wiki/OpenCV%20Change%20Logs (OpenCV Change Logs)
 
 
=OpenCV 2 Info=
 
 
==Some Differences Between cv and cv2==
 
 
* single import of all of OpenCV using <big>import cv</big>
 
* OpenCV functions no longer have the "cv" prefix
 
* simple types like CvRect and CvScalar use Python tuples
 
* sharing of Image storage, so image transport between OpenCV and other systems (e.g. numpy and ROS) is very efficient
 
* complete documentation for the Python functions
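
To make the difference concrete, here is a small sketch (assuming OpenCV 2.4.x with both bindings installed and a placeholder image file) showing that cv2 works on numpy arrays while cv works on IplImage/CvMat objects, and how to convert between them:

 # Sketch: cv2 images are numpy arrays, cv images are IplImage/CvMat objects.
 # Assumes OpenCV 2.4.x with both bindings and a placeholder "face.jpg".
 import numpy
 import cv2
 import cv2.cv as cv
 
 img2 = cv2.imread("face.jpg")      # cv2: plain numpy ndarray, no "cv" prefix on names
 img1 = cv.LoadImage("face.jpg")    # cv: IplImage object, old CamelCase names
 
 mat = cv.fromarray(img2)           # numpy array -> CvMat, for old cv functions
 back = numpy.asarray(mat)          # CvMat -> numpy array, for cv2 functions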
 
 
== OpenCV 2 - Great Resources ==
 
 
=== cv 2.4.3 ===
 
* http://docs.opencv.org -> documentation
 
 
* http://docs.opencv.org/opencv2refman.pdf / http://cvhci.anthropomatik.kit.edu/download/visionhci09/opencv.pdf -> pdf documentation resources
 
 
* http://fossies.org/dox/OpenCV-2.4.3/index.html -> Complete Hierarchical guide to opencv
 
 
* http://docs.opencv.org/doc/tutorials/tutorials.html -> tutorials on opencv
 
 
* http://docs.scipy.org/doc/numpy/reference/arrays.ndarray.html#array-attributes -> structure of numpy arrays
 
 
* Books:
 
  Author: Laganière, Robert.
 
  Title: OpenCV 2 computer vision application programming cookbook [electronic resource] :
 
  over 50 recipes to master this library of programming functions for real-time computer vision / Robert Laganière.
 
  Imprint Birmingham, U.K. : Packt Open Source Pub., 2011.
 
  Availability: eBook in the Drexel Library.
 
 
  Author: Gary Bradski; Adrian Kaehler
 
  Title: Learning OpenCV 
 
  Publisher: O'Reilly Media, Inc.
 
  Pub. Date: September 24, 2008
 
  Availability: ACM Safari books collection for anyone who is a member.
 
 
==== A cv2 example ====
 
Here is a little program that captures video from the webcam and displays it (press Esc in the video window to quit, or Ctrl+C in the terminal):

 """
 This module is used for testing the opencv2 capabilities
 """
 import cv2
 
 # get the webcam feed
 capture = cv2.VideoCapture(0)
 
 while True:
     # read() combines VideoCapture.grab() and VideoCapture.retrieve()
     retrieval_value, image = capture.read()
     # show the captured image in a window
     cv2.imshow("webcam", image)
     # stop the capture when Esc (key code 27) is pressed in the video window
     if cv2.waitKey(10) == 27:
         break
 
 # release the webcam and close the window
 capture.release()
 cv2.destroyAllWindows()
 
 
=== cv 2.1 ===
 
* http://opencv.willowgarage.com/documentation/python/index.html -> documentation
 
* http://nullege.com/codes/search/cv -> code samples
 
 
=OpenCV Community=
 
* The #opencv IRC channel on Freenode -- currently 'Un-official'
 
* http://opencv.willowgarage.com/wiki/FullOpenCVWiki#Welcome.2FPeople.People
 
* http://tech.groups.yahoo.com/group/OpenCV/ -> User Group on Yahoo
 
* http://answers.opencv.org/questions/ -> Q&A
 
* http://code.opencv.org/projects/opencv/issues -> Bugs
 
 
=Code Meeting Notes=
 
* '''Meetings occur on Wednesdays at noon (starting on 2/20) on irc.gnome.org #mousetrap - holidays and other circumstances permitting.'''
 
== 2.19.2013 ==
 
=== Agenda ===
 
# Status
 
# Go over git+mousetrap installation [[http://www.xcitegroup.org/foss2serve/index.php/MouseTrap_Dev_Help#Complete_Git_.2B_Mousetrap_Install]]
 
# Address any issues
 
# Go over mousetrap hierarchy [[http://www.xcitegroup.org/foss2serve/index.php/MouseTrap_Dev_Help#The_MouseTrap_Program]]
 
# Go over opencv integration methods [[http://www.xcitegroup.org/foss2serve/index.php/MouseTrap_Dev_Help#Files_affected_by_OpenCV_2.4.3_upgrade]]
 
# Next Steps
 
 
 
===Meeting summary===
 
---------------
 
'''Status'''
 
  * LINK:
 
    http://gnome-mousetrap.sourcearchive.com/documentation/0.4-2/main.html
 
    (amber, 18:14:27)
 
  * LINK:
 
    http://gnome-mousetrap.sourcearchive.com/documentation/0.4-2/main.html
 
    (amber, 18:14:29)
 
  * LINK: http://code.opencv.org/projects/opencv/wiki/Meeting_notes
 
    (Dark_Rose, 18:17:39)
 
  * LINK:
 
    http://www.xcitegroup.org/foss2serve/index.php/MouseTrap_Dev_Help#The_MouseTrap_Program
 
    (amber, 18:20:24)
 
 
'''Git and mousetrap installation'''
 
  * creating your own branch in git: git checkout -b my_fantastic_branch
 
    (Stoney, 18:29:24)
 
  * LINK:
 
    http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging
 
    (amber, 18:32:31)
 
 
'''Issues'''
 
  * LINK: http://code.google.com/p/pyopencv/  (john, 18:46:37)
 
  * ctypes-opencv is a package that brings Willow Garage's (formerly
 
    Intel's) Open Source Computer Vision Library (OpenCV) to Python.
 
    OpenCV is a collection of algorithms and sample code for various
 
    computer vision problems. The goal of ctypes-opencv is to provide
 
    Python access to all documented functionality of OpenCV.  (amber,
 
    18:46:39)
 
  * LINK: http://code.google.com/p/pyopencv/  (amber, 18:46:46)
 
  * http://pythonhosted.org/pyopencv/2.1.0.wr1.2.0/  (amber, 18:51:37)
 
  * LINK: http://pythonhosted.org/pyopencv/2.1.0.wr1.2.0/  (amber,
 
    18:51:45)
 
 
'''Integration methods'''
 
  * LINK:
 
    http://gnome-mousetrap.sourcearchive.com/documentation/0.4-2/dir_9d66d24675e04ade776f5269c3621ea1.html
 
    (amber, 18:58:53)
 
  * LINK:
 
    http://gnome-mousetrap.sourcearchive.com/documentation/0.3plus-psvn17-2ubuntu1/classmouseTrap_1_1ocvfw_1_1ocvfw.html
 
    (amber, 19:02:40)
 
  * ACTION: Create Module documentation for MouseTrap  (amber, 19:14:55)
 
 
Meeting ended at 19:16:36 CET.
 
 
'''Action Items'''
 
------------
 
* Create Module documentation for MouseTrap
 
 
'''People Present (lines said)'''
 
---------------------------
 
* amber (125)
 
* john (39)
 
* Stoney (31)
 
* logan_h (14)
 
* darci (8)
 
* Dark_Rose (5)
 
* tota11y (2)
 
 
=== Conclusions ===
 
* documentation on Mousetrap's OpenCV FrameWork is needed for further inspection.
 
 
== 2.26.2013 ==
 
* Address meeting reschedule
 
  
 
[[Category:Gnome_MouseTrap]]
 

= Standard Dev Environment =

* Fedora 18
* OpenCV 2.4.X - whatever recent version is supported
* Python 2.X for now; upgrading to Python 3 once the OpenCV issue is fixed

= Complete Git + Mousetrap Install =

== Let's get git ==

 Open a terminal, become root
 Run Commands:
 -> cd /opt
 -> mkdir git                                                * just fyi you can put your repo anywhere, this is just how I was taught
 -> yum install git-core                                     * Install git
 -> cd /opt/git
 -> git clone git://github.com/amberheilman/mousetrap.git  * Now you have pulled down the git repo
 -> git checkout fix_install                                 * This is my branch
 -> git pull                                                 * DO NOT MAKE CHANGES HERE! We will all be working in separate branches!

== Make a branch ==

 -> git branch INSERT_BRANCH_NAME      * this clones my branch and creates a new one
 -> git checkout INSERT_BRANCH_NAME    * this opens up your branch that you just created
 -> git branch                         * now you can see the branch you're in (marked with a *) and all other branches in this repo

== Mousetrap Install method ==

 Install Dependencies:
 -> yum install gnome-common
 -> yum install glib2-devel
 -> yum install intltool
 -> yum install python-devel
 -> yum install opencv-python
 -> yum install python-xlib
 Run Commands:
 -> cd /opt/git/mousetrap/src          * THIS WILL NOT WORK OUTSIDE THIS DIR!
 -> git branch fix_install
 -> ./autogen.sh
 -> make
 -> make install
 -> mousetrap                          * You may have errors but they should be similar to my own (towards the end)

== Make your first commit ==

 -> git status                         * Shows all of the modified files
 -> git add *                          * This is to add ALL files to commit list (MAKE SURE YOU WANT THEM ALL FIRST!)
 -> git commit                         * Add a useful title to the first line of your commit.
                                         This opens in vim, so press 'i' to insert, then Esc and ':x' to save and quit.
 -> git push origin INSERT_BRANCH_NAME * This must be the branch you created in the git install.
                                         This will ask for your git credentials, so have them ready.