Abstract
An introduction to application development tools in Red Hat Enterprise Linux 6.
Installing the git Package
To install the git package and all of its dependencies, type the following at a shell prompt as root: yum install git
Configuring the Default Text Editor
Several Git commands, such as git commit, require the user to write a short message or make some changes in an external text editor. To determine which text editor to start, Git attempts to read the value of the GIT_EDITOR environment variable, the core.editor configuration option, the VISUAL environment variable, and finally the EDITOR environment variable, in this particular order. If none of these options and variables are specified, the git command starts vi as a reasonable default option.
To change the core.editor configuration option in order to specify a different text editor, type the following at a shell prompt: git config --global core.editor command
Example 1.1. Configuring the Default Text Editor
To configure Git to use vim as the default text editor, type the following at a shell prompt: git config --global core.editor vim
Setting Up User Information
In Git, each commit is attributed to its author. Before you start committing changes, set your full name and email address by changing the user.name and user.email configuration options: git config --global user.name "full name" and git config --global user.email "email_address"
Example 1.2. Setting Up User Information
To configure Git to use John Doe as your full name and john@example.com as your email address, type the following at a shell prompt.
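With the values above, the two configuration commands are:

```
git config --global user.name "John Doe"
git config --global user.email "john@example.com"
```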
Initializing an Empty Repository
To initialize an empty repository, change to the directory in which you want to keep it and type git init at a shell prompt. This creates a hidden directory named .git in which all repository information is stored.
Importing Data to a Repository
To import existing data into a newly initialized Git repository, stage the content with git add and commit it with git commit. Unless run with the -m option, this command allows you to write the commit message in an external text editor. For information on how to configure the default text editor, see the section called “Configuring the Default Text Editor”.
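A typical first import, as a minimal sketch (the project path is hypothetical):

```
cd ~/myproject                   # directory with the existing content (hypothetical path)
git init                         # create the .git repository
git add .                        # stage all files in the directory
git commit -m "Initial import"   # commit the staged content
```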
Adding Files and Directories
To add an existing file or directory to the repository, schedule it for the next commit by typing git add file_or_directory at a shell prompt.
Renaming Files and Directories
To rename an existing file or directory in the repository, type git mv old_name new_name at a shell prompt.
Deleting Files and Directories
To remove an existing file or directory from the repository, schedule it for removal in the next commit by typing git rm file_or_directory at a shell prompt.
Viewing the Current Status
To determine the current status of a working tree, type git status at a shell prompt. This command identifies each change in the tree (a new file, renamed, deleted, or modified) and tells you which changes will be applied the next time you commit them. For information on how to commit your changes, see Section 1.1.6, “Committing Changes”.
Viewing Differences
To view the differences between the working tree and the staged or committed state, type git diff at a shell prompt.
Committing Changes
To apply your changes and commit them to the repository, run git commit with the -a command line option as follows: git commit -a
Unless run with the -m option, the command allows you to write the commit message in an external text editor. For information on how to configure the default text editor, see the section called “Configuring the Default Text Editor”.
Pushing Changes to a Public Repository
To push your changes to a publicly accessible repository, type git push remote_repository at a shell prompt, where remote_repository is the name of the remote repository; in a cloned repository, this is typically origin.
Creating Patches from Individual Commits
To create patches from your commits, type git format-patch remote_repository at a shell prompt; this creates one patch file for each commit that is not yet present in remote_repository.
Installed Documentation
- gittutorial(7) — The manual page named gittutorial provides a brief introduction to Git and its usage.
- gittutorial-2(7) — The manual page named gittutorial-2 provides the second part of a brief introduction to Git and its usage.
- Git User's Manual — HTML documentation for Git is located at /usr/share/doc/git-1.7.1/user-manual.html.
Online Documentation
- Pro Git — The online version of the Pro Git book provides a detailed description of Git, its concepts and its usage.
Installing the subversion Package
To install the subversion package and all of its dependencies, type the following at a shell prompt as root: yum install subversion
Setting Up the Default Editor
Subversion commands such as svn import or svn commit require the user to write a short log message. To determine which text editor to start, the svn client application first reads the contents of the environment variable $SVN_EDITOR, then reads the more general environment variables $VISUAL and $EDITOR, and if none of these is set, it reports an error.
To specify a different text editor, add the export SVN_EDITOR=command line to your ~/.bashrc file. Replace command with a command that runs the editor of your choice (for example, emacs). Note that for this change to take effect in the current shell session, you must execute the commands in ~/.bashrc by typing the following at a shell prompt: . ~/.bashrc
Example 1.3. Setting up the default text editor
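For instance, making emacs the default Subversion editor takes two steps:

```
echo "export SVN_EDITOR=emacs" >> ~/.bashrc   # persist the setting
. ~/.bashrc                                   # re-read ~/.bashrc in the current shell
```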
Initializing an Empty Repository
To initialize an empty Subversion repository in a directory, type svnadmin create path at a shell prompt, where path is an absolute or relative path to the directory in which you want to store the repository (for example, /var/svn/). If the directory does not exist, svnadmin create creates it for you.
Example 1.4. Initializing a new Subversion repository
To initialize a new Subversion repository in the ~/svn/ directory, type: svnadmin create ~/svn
Importing Data to a Repository
To import existing data into a Subversion repository, type svn import local_path svn_repository/remote_path -m "commit message" at a shell prompt, where local_path is the path to the data you want to import (use . for the current working directory), svn_repository is a URL of the Subversion repository, and remote_path is the target directory in the Subversion repository (for example, project/trunk).
Example 1.5. Importing a project to a Subversion repository
Imagine that a Subversion repository has been created in ~/svn/ (in this example, /home/john/svn/). To import the project under project/trunk in this repository, change to the project directory and type: svn import . file:///home/john/svn/project/trunk -m "Initial import."
Checking Out a Working Copy
To check out a working copy of a project from a Subversion repository, type svn checkout svn_repository/remote_path directory at a shell prompt.
Example 1.6. Checking out a working copy
Imagine that a Subversion repository resides in the ~/svn/ directory (in this case, /home/john/svn/) and that this repository contains the latest version of a project in the project/trunk subdirectory. To check out a working copy of this project, run the checkout command against that URL.
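Under those assumptions, the checkout is:

```
svn checkout file:///home/john/svn/project/trunk project   # creates a working copy in ./project
```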
Adding a File or Directory
To add an existing file or directory to a working copy and schedule it for addition to the repository, type svn add file_or_directory at a shell prompt, then apply the change with the svn commit command as described in Section 1.2.6, “Committing Changes”.
Example 1.7. Adding a file to a Subversion repository
Imagine that a working copy contains a new file named ChangeLog, while all other files and directories within this directory are already under revision control. To schedule this file for addition to the Subversion repository, type: svn add ChangeLog
Renaming a File or Directory
To rename an existing file or directory in a working copy, type svn move old_name new_name at a shell prompt, then apply the change with the svn commit command as described in Section 1.2.6, “Committing Changes”.
Example 1.8. Renaming a file in a Subversion repository
To schedule the LICENSE file for renaming to COPYING, type: svn move LICENSE COPYING
Note that svn move automatically renames the file in your working copy.
Deleting a File or Directory
To remove an existing file or directory from a working copy and schedule it for removal from the repository, type svn delete file_or_directory at a shell prompt, then apply the change with the svn commit command as described in Section 1.2.6, “Committing Changes”.
Example 1.9. Deleting a file from a Subversion repository
To schedule the TODO file for removal from the SVN repository, type: svn delete TODO
Note that svn delete automatically deletes the file from your working copy.
Viewing the Status
To display the current status of a working copy, type svn status at a shell prompt. This command displays a letter code next to each modified item (A for a file that is scheduled for addition, D for a file that is scheduled for removal, M for a file that contains local changes, C for a file with unresolved conflicts, ? for a file that is not under revision control).
Example 1.10. Viewing the status of a working copy
Imagine a working copy with a new file named ChangeLog, which is scheduled for addition to the Subversion repository; all other files and directories within this directory are already under revision control. The TODO file, which is also under revision control, has been scheduled for removal and is no longer present in the working copy. The LICENSE file has been renamed to COPYING, and Makefile contains local changes. To display the status of such a working copy, run svn status.
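For this scenario, the letter codes would look roughly as follows (illustrative output; A + marks an addition with history produced by svn move):

```
$ svn status
D       TODO
A       ChangeLog
D       LICENSE
A  +    COPYING
M       Makefile
```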
Viewing Differences
To view the differences between a working copy and the repository, type svn diff at a shell prompt, optionally followed by a file name.
Example 1.11. Viewing changes to a working copy
Imagine that the Makefile in your working copy contains local changes. To view these changes, type: svn diff Makefile
Committing Changes
To share your changes with others and apply them to the repository, type svn commit -m "commit message" at a shell prompt, replacing commit message with a short description of your changes.
Example 1.12. Committing changes to a Subversion repository
Imagine that ChangeLog is scheduled for addition to the Subversion repository, Makefile is already under revision control and contains local changes, and the TODO file, which is also under revision control, has been scheduled for removal and is no longer present in the working copy. Additionally, the LICENSE file has been renamed to COPYING. To commit these changes to the Subversion repository, type: svn commit -m "commit message"
Updating a Working Copy
To update your working copy with the latest changes from the repository, type svn update at a shell prompt.
Example 1.13. Updating a working copy
Imagine that somebody else has recently added ChangeLog to the repository, removed the TODO file from it, changed the name of LICENSE to COPYING, and made some changes to Makefile. To update your working copy, type: svn update
Installed Documentation
- svn help — The output of the svn help command provides detailed information on svn usage.
- svnadmin help — The output of the svnadmin help command provides detailed information on svnadmin usage.
Online Documentation
- Version Control with Subversion — The official Subversion website refers to the Version Control with Subversion manual, which provides an in-depth description of Subversion, its administration and its usage.
Installing the cvs Package
To install the cvs package and all of its dependencies, type the following at a shell prompt as root: yum install cvs
Setting Up the Default Editor
CVS commands such as cvs import or cvs commit require the user to write a short log message. To determine which text editor to start, the cvs client application first reads the contents of the environment variable $CVSEDITOR, then reads the more general environment variable $EDITOR, and if none of these is set, it starts vi.
To specify a different text editor, add the export CVSEDITOR=command line to your ~/.bashrc file. Replace command with a command that runs the editor of your choice (for example, emacs). Note that for this change to take effect in the current shell session, you must execute the commands in ~/.bashrc by typing the following at a shell prompt: . ~/.bashrc
Example 1.14. Setting up the default text editor
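For instance, making emacs the default CVS editor takes two steps:

```
echo "export CVSEDITOR=emacs" >> ~/.bashrc   # persist the setting
. ~/.bashrc                                  # re-read ~/.bashrc in the current shell
```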
Initializing an Empty Repository
To initialize an empty CVS repository in a directory, type cvs -d path init at a shell prompt, where path is an absolute or relative path to the directory in which you want to store the repository (for example, /var/cvs/). Alternatively, you can specify this path by changing the value of the $CVSROOT environment variable: export CVSROOT=path
This allows you to omit the -d option from cvs init and other CVS-related commands: cvs init
Example 1.15. Initializing a new CVS repository
To initialize a new CVS repository in the ~/cvs/ directory, type: cvs -d ~/cvs init
Importing Data to a Repository
To import existing data into a CVS repository, change to the directory with the data and type cvs import -m "commit message" module vendor_tag release_tag at a shell prompt, where module is the target directory in the CVS repository (for example, project), and vendor_tag and release_tag are vendor and release tags.
Example 1.16. Importing a project to a CVS repository
Imagine that a CVS repository has been created in ~/cvs/. To import the project under project in this repository with vendor tag mycompany and release tag init, change to the project directory and type: cvs -d ~/cvs import -m "Initial import." project mycompany init
Checking Out a Working Copy
To check out a working copy of a module, type cvs -d path checkout module at a shell prompt, where path is the path to the CVS repository and module is the name of a module in it (for example, project). Alternatively, you can set the $CVSROOT environment variable as follows: export CVSROOT=path
You can then run the cvs checkout command without the -d option: cvs checkout module
Example 1.17. Checking out a working copy
Imagine that a CVS repository resides in ~/cvs/ and that this repository contains a module named project. To check out a working copy of this module, run the checkout command for that module.
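Under those assumptions, the checkout is:

```
export CVSROOT=~/cvs    # point the cvs client at the repository
cvs checkout project    # check out a working copy of the module
```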
Adding a File
To add an existing file to a CVS working copy and schedule it for the next commit, type cvs add file at a shell prompt, then apply the change with the cvs commit command as described in Section 1.3.6, “Committing Changes”.
Example 1.18. Adding a file to a CVS repository
Imagine that a working copy contains a new file named ChangeLog, while all other files and directories within this directory are already under revision control. To schedule this file for addition to the CVS repository, type: cvs add ChangeLog
Deleting a File
To remove an existing file from a CVS repository, delete your local copy of the file and schedule it for removal with cvs remove file at a shell prompt, then apply the change with the cvs commit command as described in Section 1.3.6, “Committing Changes”.
Example 1.19. Removing a file from a CVS repository
To schedule the TODO file for removal from the CVS repository, type: cvs remove TODO
Viewing the Status
To display the status of a working copy, type cvs status at a shell prompt. For each file, this command displays its status (for example, Up-to-date, Locally Added, Locally Removed, or Locally Modified) and revision. However, if you are only interested in what has changed in your working copy, you can simplify the output by filtering it, for example with grep.
Example 1.20. Viewing the status of a working copy
Imagine a working copy with a new file named ChangeLog, which is scheduled for addition to the CVS repository; all other files and directories within this directory are already under revision control. The TODO file, which is also under revision control, has been scheduled for removal and is no longer present in the working copy. Finally, Makefile contains local changes. To display the status of such a working copy, type: cvs status
Viewing Differences
To view the differences between a working copy and the repository, type cvs diff at a shell prompt, optionally followed by a file name.
Example 1.21. Viewing changes to a working copy
Imagine that the Makefile in your working copy contains local changes. To view these changes, type: cvs diff Makefile
Committing Changes
To apply your changes to the CVS repository, type cvs commit -m "commit message" at a shell prompt, replacing commit message with a short description of your changes.
Example 1.22. Committing changes to a CVS repository
Imagine that ChangeLog is scheduled for addition to the CVS repository, Makefile is already under revision control and contains local changes, and the TODO file, which is also under revision control, has been scheduled for removal and is no longer present in the working copy. To commit these changes to the CVS repository, type: cvs commit -m "commit message"
Updating a Working Copy
To update your working copy with the latest changes from the repository, type cvs update at a shell prompt.
Example 1.23. Updating a working copy
Imagine that somebody else has recently added ChangeLog to the repository, removed the TODO file from it, and made some changes to Makefile. To update your working copy, type: cvs update
Installed Documentation
- cvs(1) — The manual page for the cvs client program provides detailed information on its usage.
Static linking has several drawbacks, including:
- Slower application startup time.
- Security measures like load address randomization cannot be used.
- Dynamic loading of shared objects outside of glibc is not supported.
Running rpm -qpi compat-glibc-* will provide some information on how to use this package.
The libstdc++ package provides the GNU C++ Standard Library, which is an ongoing project to implement the ISO 14882 Standard C++ library.
To use the man pages for library components, install the libstdc++-docs package. This will provide man page information for nearly all resources provided by the library; for example, to view information about the vector container, use its fully-qualified component name: man std::vector. The package also provides HTML documentation in /usr/share/doc/libstdc++-docs-version/html/spine.html.
The boost package is actually a meta-package, containing many library sub-packages. These sub-packages can also be installed individually to provide finer inter-package dependency tracking. Documentation is available in /usr/share/doc/boost-doc-version/index.html.
The qt package provides the Qt (pronounced 'cute') cross-platform application development framework used in the development of GUI programs. Aside from being a popular 'widget toolkit', Qt is also used for developing non-GUI programs such as console tools and servers. Qt was used in the development of notable projects such as Google Earth, KDE, Opera, OPIE, VoxOx, Skype, VLC media player and VirtualBox. It is produced by Nokia's Qt Development Frameworks division, which came into being after Nokia's acquisition of the Norwegian company Trolltech, the original producer of Qt, on June 17, 2008.
The Qt version shipped in Red Hat Enterprise Linux 6 introduces, among other features:
- Advanced Graphics Effects: options for opacity, drop-shadows, blur, colorization, and other similar effects
- Animation and State Machine: create simple or complex animations without the hassle of managing complex code
- Support for new platforms
- Windows 7, Mac OSX 10.6, and other desktop platforms are now supported
- Added support for mobile development; Qt is optimized for the upcoming Maemo 6 platform, and will soon be ported to Maemo 5. In addition, Qt now supports the Symbian platform, with integration for the S60 framework.
- Added support for Real-Time Operating Systems such as QNX and VxWorks
- Improved performance, featuring added support for hardware-accelerated rendering (along with other rendering updates)
- Integrated GUI layout and forms designer
- Integrated, context-sensitive help system
- Rapid code navigation tools
The qt-doc package provides HTML manuals and references located in /usr/share/doc/qt4/html/. This package also provides the Qt Reference Documentation, which is an excellent starting point for development within the Qt framework.
Further demos and examples are provided by qt-demos and qt-examples. To get an overview of the capabilities of the Qt framework, see /usr/bin/qtdemo-qt4 (provided by qt-demos).
The kdelibs-devel package provides the KDE libraries, which build on Qt to provide a framework for making application development easier. The KDE development framework also helps provide consistency across the KDE desktop environment. For example, the sonnet framework replaces kspell2 in KDE4.
The kdelibs-apidocs package provides HTML documentation for the KDE development framework in /usr/share/doc/HTML/en/kdelibs4-apidocs/. The following links also provide details on KDE-related programming tasks:
GNOME power management is handled by gnome-power-manager. It was introduced in Red Hat Enterprise Linux 5 and provides a complete and integrated solution to power management under the GNOME desktop environment. In Red Hat Enterprise Linux 6, the storage-handling parts of hal were replaced by udisks, and the libgnomeprint stack was replaced by print support in gtk2.
The following table lists the versions of desktop components associated with gnome-power-management that are shipped with the various Red Hat Enterprise Linux versions.
Table 2.1. Desktop Components Comparison
| GNOME Power Management Desktop Components | Red Hat Enterprise Linux 5 | Red Hat Enterprise Linux 6 |
|---|---|---|
| hal | 0.5.8 | |
| udisks | N/A | |
| glib2 | 2.12.3 | |
| gtk2 | 2.10.4 | |
| gnome-vfs2 | 2.16.2 | |
| libglade2 | 2.6.0 | |
| libgnomecanvas | 2.14.0 | |
| gnome-desktop | 2.16.0 | |
| gnome-media | 2.16.1 | |
| gnome-python2 | 2.16.0 | |
| libgnome | 2.16.0 | |
| libgnomeui | 2.16.0 | |
| libgnomeprint22 | 2.12.1 | |
| libgnomeprintui22 | 2.12.1 | |
| gnome-session | 2.16.0 | |
| gnome-power-manager | 2.16.0 | |
| gnome-applets | 2.16.0 | |
| gnome-panel | 2.16.1 | |
Some of the differences in glib between version 2.4 and 2.12 (or between Red Hat Enterprise Linux 4 and Red Hat Enterprise Linux 5) are:
- GKeyFile (a key/ini file parser)
- GMappedFile (a map wrapper)
- GBookmarkFile (a bookmark file parser)
- Native atomic ops on s390
- Atomic reference counting for GObject
Some of the differences in glib between version 2.12 and 2.22 (or between Red Hat Enterprise Linux 5 and Red Hat Enterprise Linux 6) are:
- GSequence (a list data structure that is implemented as a balanced tree)
- Support for monotonic clocks
- GIO (a VFS library to replace gnome-vfs)
- GChecksum (support for hash algorithms such as MD5 and SHA-256)
- Support for sockets and network IO in GIO
- GMarkup performance improvements
Some of the differences in GTK+ between version 2.4 and 2.10 (or between Red Hat Enterprise Linux 4 and Red Hat Enterprise Linux 5) are:
- GtkAboutDialog
- GtkFileChooserButton
- GtkAssistant
- GtkRecentChooser
- GtkCellRendererProgress
- GtkCellRendererSpin
- Printing Support
- Ellipsisation support in labels, progressbars and treeviews
- Improved themability
Some of the differences in GTK+ between version 2.10 and 2.18 (or between Red Hat Enterprise Linux 5 and Red Hat Enterprise Linux 6) are:
- GtkVolumeButton
- GtkBuilder to replace libglade
- GtkMountOperation
- Scale marks
- Support runtime font configuration changes
In NSS, the legacy key3.db and cert8.db databases are also replaced with new SQL databases called key4.db and cert9.db. These new databases will store PKCS #11 token objects, which are the same as what is currently stored in cert8.db and key3.db.
A system-wide NSS database resides in /etc/pki/nssdb, where globally trusted CA certificates become accessible to all applications. The command rv = NSS_InitReadWrite("sql:/etc/pki/nssdb"); initializes NSS for applications. If the application is run with root privileges, then the system-wide database is available on a read and write basis. However, if it is run with normal user privileges, it becomes read only.
The python package adds support for the Python programming language. This package provides the object and cached bytecode files required to enable runtime support for basic Python programs. It also contains the python interpreter and the pydoc documentation tool. The python-devel package contains the libraries and header files required for developing Python extensions.
Red Hat Enterprise Linux also ships with many python-related packages. By convention, the names of these packages have a python prefix or suffix. Such packages are either library extensions or python bindings to an existing library. For instance, dbus-python is a Python language binding for D-Bus.
Note that both cached bytecode (*.pyc/*.pyo files) and compiled extension modules (*.so files) are incompatible between Python 2.4 (used in Red Hat Enterprise Linux 5) and Python 2.6 (used in Red Hat Enterprise Linux 6). As such, you will be required to rebuild any extension modules you use that are not part of Red Hat Enterprise Linux.
- What's New in Python 2.5: http://docs.python.org/whatsnew/2.5.html
- What's New in Python 2.6: http://docs.python.org/whatsnew/2.6.html
For more information about the Python language, see man python. You can also install python-docs, which provides HTML manuals and references in the following location: file:///usr/share/doc/python-docs-version/html/index.html
For documentation on individual modules, use pydoc component_name. For example, pydoc math displays documentation for the math Python module.
The java-1.6.0-openjdk package adds support for the Java programming language and provides the java interpreter. The java-1.6.0-openjdk-devel package contains the javac compiler, as well as the libraries and header files required for developing Java extensions.
For more information about Java, see man java. Some associated utilities also have their own respective man pages. Many Java library packages ship their documentation in packages with a javadoc suffix (for example, dbus-java-javadoc).
The ruby package provides the Ruby interpreter and adds support for the Ruby programming language. The ruby-devel package contains the libraries and header files required for developing Ruby extensions.
Red Hat Enterprise Linux also ships with many ruby-related packages. By convention, the names of these packages have a ruby or rubygem prefix or suffix. Such packages are either library extensions or Ruby bindings to an existing library.
- ruby-irb
- ruby-libs
- ruby-rdoc
- ruby-saslwrapper
- ruby-tcltk
file:///usr/share/doc/ruby-version/NEWS-version
For more information about the Ruby language, see man ruby. You can also install ruby-docs, which provides HTML manuals and references.
The perl package adds support for the Perl programming language. This package provides Perl core modules, the Perl Language Interpreter, and the PerlDoc tool. Red Hat Enterprise Linux also provides many Perl modules as packages with a perl-* prefix. These modules provide stand-alone applications, language extensions, Perl libraries, and external library bindings.
Notable updates to the Perl language include:
- Experimental APIs allow Perl to be extended with 'pluggable' keywords and syntax.
- Perl will be able to keep accurate time well past the 'Y2038' barrier.
- Package version numbers can be directly specified in 'package' statements.
- Perl warns the user about the use of deprecated features by default.
- Improved support for IPv6.
- A new /r flag that makes s/// substitutions non-destructive.
- New regular expression flags to control whether matched strings should be treated as ASCII or Unicode.
- Less memory and CPU usage than previous releases.
- The $$ variable is writable.
- Accessing Unicode database files directly is now deprecated; use Unicode::UCD instead.
- Version::Requirements is deprecated in favor of CPAN::Meta::Requirements.
- assert.pl
- bigint.pl
- cacheout.pl
- ctime.pl
- exceptions.pl
- flush.pl
- getopt.pl
- hostname.pl
- lib/find{,depth}.pl
- newgetopt.pl
- open3.pl
- shellwords.pl
- tainted.pl
- timelocal.pl
The Perl 5.16 delta can be accessed at http://perldoc.perl.org/perl5160delta.html.
Core Perl modules can be installed with yum or rpm from the Red Hat Enterprise Linux repositories. They are installed to /usr/share/perl5 and either /usr/lib/perl5 for 32bit architectures or /usr/lib64/perl5 for 64bit architectures.
You can also use the cpan tool provided by the perl-CPAN package to install modules directly from the CPAN website. They are installed to /usr/local/share/perl5 and either /usr/local/lib/perl5 for 32bit architectures or /usr/local/lib64/perl5 for 64bit architectures.
Perl modules installed from Red Hat Enterprise Linux packages are placed in /usr/share/perl5/vendor_perl and either /usr/lib/perl5/vendor_perl for 32bit architectures or /usr/lib64/perl5/vendor_perl for 64bit architectures.
Module documentation is installed in the /usr/share/man directory, and the perldoc tool provides documentation on language and core modules. To learn more about a module, use perldoc module_name. For example, perldoc CGI displays documentation for the CGI core module. For information about Perl functions, use perldoc -f function_name. For example, perldoc -f split displays documentation for the split function.
The GNU Compiler Collection (GCC) provides compilers (gcc and g++), run-time libraries (like libgcc, libstdc++, libgfortran, and libgomp), and miscellaneous other utilities.
- Register usage conventions. These specify how processor registers are allocated and used.
- Object file formats. These specify the representation of binary object code.
- Size, layout, and alignment of data types. These specify how data is laid out in memory.
- Interfaces provided by the runtime environment. Where the documented semantics do not change from one version to another they must be kept available and use the same name at all times.
- Creation and propagation of exceptions
- Constructors and destructors
- Layout, alignment, and padding of classes and derived classes
- Virtual function implementation details, such as the layout and alignment of virtual tables
The following is a list of known incompatibilities between the Red Hat Enterprise Linux 6 and 5 toolchains.
- Passing/returning structs with flexible array members by value changed in some cases on Intel 64 and AMD64.
- Passing/returning of unions with long double members by value changed in some cases on Intel 64 and AMD64.
- Passing/returning structs with complex float member by value changed in some cases on Intel 64 and AMD64.
- Passing of 256-bit vectors on x86, Intel 64 and AMD64 platforms changed when -mavx is used.
- There have been multiple changes in passing of _Decimal{32,64,128} types and aggregates containing those by value on several targets.
- Packing of packed char bitfields changed in some cases.
The following is a list of known incompatibilities between the Red Hat Enterprise Linux 5 and 4 toolchains.
- There have been changes in the library interface specified by the C++ ABI for thread-safe initialization of function-scope static variables.
- On Intel 64 and AMD64, the medium model for building applications where data segment exceeds 4GB, was redesigned to match the latest ABI draft at the time. The ABI change results in incompatibility among medium model objects.
-Wabi can be used to get diagnostics indicating where these constructs appear in source code, though it will not catch every single case. This flag is especially useful for C++ code to warn whenever the compiler generates code that is known to be incompatible with the vendor-neutral C++ ABI. -fabi-version=1 option. This practice is not recommended. Objects created this way are indistinguishable from objects conforming to the current stable ABI, and can be linked (incorrectly) amongst the different ABIs, especially when using new compilers to generate code to be linked with old libraries that were built with tools prior to Red Hat Enterprise Linux 4. ld (distributed as part of the binutils package) or in the dynamic loader (ld.so, distributed as part of the glibc package) can subtly change the object files that the compiler produces. These changes mean that object files moving to the current release of Red Hat Enterprise Linux from previous releases may lose functionality, behave differently at runtime, or otherwise interoperate in a diminished capacity. Known problem areas include: - In Red Hat Enterprise Linux 6 this is passed to
ldby default, whereas Red Hat Enterprise Linux 5lddoesn't recognize it. - In Red Hat Enterprise Linux 6 this directive allows
.debug_frame,.eh_frameor both to be omitted from.cfi*directives. In Red Hat Enterprise Linux 5 only.eh_frameis omitted. as,ld,ld.so, andgdbSTB_GNU_UNIQUEand%gnu_unique_symbolsupportIn Red Hat Enterprise Linux 6 more debug information is generated and stored in object files. This information relies on new features detailed in theDWARFstandard, and also on new extensions not yet standardized. In Red Hat Enterprise Linux 5, tools likeas,ld,gdb,objdump, andreadelfmay not be prepared for this new information and may fail to interoperate with objects created with the newer tools. In addition, Red Hat Enterprise Linux 5 produced object files do not support these new features; these object files may be handled by Red Hat Enterprise Linux 6 tools in a sub-optimal manner.An outgrowth of this enhanced debug information is that the debuginfo packages that ship with system libraries allow you to do useful source level debugging into system libraries if they are installed. See Section 4.2, “Installing Debuginfo Packages” for more information on debuginfo packages.
To compile a program with GCC, use the gcc command. This is the main driver for the compiler. It can be used from the command line to pre-process or compile a source file, link object files and libraries, or perform a combination thereof. By default, gcc takes care of the details and links in the provided libgcc library.
Example 3.1. hello.c
Procedure 3.1. Compiling a 'Hello World' C Program
- Compile Example 3.1, "hello.c" into an executable with: gcc hello.c -o hello. Ensure that the resulting binary hello is in the same directory as hello.c.
Example 3.2. hello.cc
Procedure 3.2. Compiling a 'Hello World' C++ Program
- Compile Example 3.2, "hello.cc" into an executable with: g++ hello.cc -o hello. Ensure that the resulting binary hello is in the same directory as hello.cc.
Example 3.3. one.c
Example 3.4. two.c
Procedure 3.3. Compiling a Program with Multiple Source Files
- Compile Example 3.3, "one.c" into an object file with: gcc -c one.c. Ensure that the resulting object file one.o is in the same directory as one.c.
- Compile Example 3.4, "two.c" into an object file with: gcc -c two.c. Ensure that the resulting object file two.o is in the same directory as two.c.
- Compile the two object files one.o and two.o into a single executable with: gcc one.o two.o -o hello. Ensure that the resulting binary hello is in the same directory as one.o and two.o.
The complete sequence is sketched after this list.
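As a runnable summary of Procedure 3.3:

```
gcc -c one.c               # produces the object file one.o
gcc -c two.c               # produces the object file two.o
gcc one.o two.o -o hello   # links both object files into the executable hello
./hello                    # run the result
```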
It is very important to choose the correct architecture for instruction scheduling. By default GCC produces code optimized for the most common processors, but if the CPU on which your code will run is known, the corresponding -mtune= option to optimize the instruction scheduling, and -march= option to optimize the instruction selection should be used.
-mtune= optimizes instruction scheduling to fit your architecture by tuning everything except the ABI and the available instruction set. This option will not choose particular instructions, but instead will tune your program in such a way that executing on a particular architecture will be optimized. For example, if an Intel Core2 CPU will predominantly be used, choose -mtune=core2. If the wrong choice is made, the program will still run, but not optimally on the given architecture. The architecture on which the program will most likely run should always be chosen.
-march= optimizes instruction selection. As such, it is important to choose correctly, as choosing incorrectly will cause your program to fail. This option selects the instruction set used when generating code. For example, if the program will be run on an AMD K8 core based CPU, choose -march=k8. Specifying the architecture with this option will imply -mtune=.
The -mtune= and -march= options should only be used for tuning and selecting instructions within a given architecture, not to generate code for a different architecture (also known as cross-compiling). For example, they are not to be used to generate PowerPC code from an Intel 64 and AMD64 platform.
For more information about -march= and -mtune=, see the GCC documentation available here: GCC 4.4.4 Manual: Hardware Models and Configurations
The compiler flag -O2 is a good middle of the road option to generate fast code. It produces the best optimized code when the resulting code size is not large. Use this when unsure what would best suit.
Where faster code matters more than a slightly larger binary, -O3 is preferable. This option produces code that is slightly larger but runs faster because of more frequent inlining of functions, and is ideal for floating point intensive code.
Where a small footprint matters, use -Os. This flag also optimizes for size, and produces faster code in situations where a smaller footprint will increase code locality, thereby reducing cache misses.
Use -frecord-gcc-switches when compiling objects. This records the options used to build objects into the objects themselves. After an object is built, it determines which set of options were used to build it. The set of options is recorded in a section called .GCC.command.line within the object and can be examined with the following: readelf --string-dump=.GCC.command.line file
3.1.3.5. Using Profile Feedback to Tune Optimization Heuristics
Profile feedback is used to tune optimization heuristics such as:
- Branch prediction
- Inter-procedural constant propagation
Procedure 3.4. Using Profile Feedback
- The application must be instrumented to produce profiling information by compiling it with -fprofile-generate.
- Run the application to accumulate and save the profiling information.
- Recompile the application with -fprofile-use to optimize it using the collected profile.
Procedure 3.5. Compiling a Program with Profiling Feedback
- Compile source.c to include profiling instrumentation.
- Run executable to gather profiling information.
- Recompile and optimize source.c with the profiling information gathered in step one.
The commands for each step are sketched after this list.
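A minimal sketch of the three steps (real builds would typically add an optimization level such as -O2 to both compilations):

```
gcc source.c -o executable -fprofile-generate   # step 1: instrumented build
./executable                                    # step 2: run to produce *.gcda profile data
gcc source.c -o executable -fprofile-use        # step 3: rebuild using the collected profile
```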
To place the profiling output somewhere other than the default location, compile with -fprofile-dir=DIR where DIR is the preferred output directory.
To compile and run 32-bit programs on a 64-bit host, install the 32-bit versions of the run-time libraries: at a minimum glibc and libgcc, and libstdc++ if the program is a C++ program. On Intel 64 and AMD64, this can be done with: yum install glibc-devel.i686 libgcc.i686 libstdc++-devel.i686
If a program needs the db4-devel libraries to build, the 32-bit version of these libraries can be installed with: yum install db4-devel.i686
The .i686 suffix on the x86 platform (as opposed to x86-64) specifies a 32-bit version of the given package. For PowerPC architectures, the suffix is ppc (as opposed to ppc64).
The -m32 option can be passed to the compiler and linker to produce 32-bit executables. Provided the supporting 32-bit libraries are installed on the 64-bit system, this executable will be able to run on both 32-bit systems and 64-bit systems.
Procedure 3.6. Compiling a 32-bit Program on a 64-bit Host
- On a 64-bit system, compile hello.c into a 64-bit executable with: gcc hello.c -o hello64
- Ensure that the resulting executable is a 64-bit binary: the file command on a 64-bit executable will include ELF 64-bit in its output, and ldd will list /lib64/libc.so.6 as the main C library linked.
- On a 64-bit system, compile hello.c into a 32-bit executable with: gcc -m32 hello.c -o hello32
- Ensure that the resulting executable is a 32-bit binary: the file command on a 32-bit executable will include ELF 32-bit in its output, and ldd will list /lib/libc.so.6 as the main C library linked.
The full sequence is sketched after this list.
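A compact version of the procedure (the output names hello64 and hello32 are illustrative):

```
gcc hello.c -o hello64        # default 64-bit build on a 64-bit host
file hello64                  # output includes "ELF 64-bit ..."
ldd hello64                   # lists /lib64/libc.so.6
gcc -m32 hello.c -o hello32   # 32-bit build on the same host
file hello32                  # output includes "ELF 32-bit ..."
ldd hello32                   # lists /lib/libc.so.6
```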
Note that -m32 will not adapt or convert a program to resolve any issues arising from 32/64-bit incompatibilities. For tips on writing portable code and converting from 32-bits to 64-bits, see the paper entitled Porting to 64-bit GNU/Linux Systems in the Proceedings of the 2003 GCC Developers Summit.
For more information about GCC, see the man pages for cpp, gcc, g++, gcj, and gfortran.
Autotools is a collection of tools used to generate an application's configure script. This script runs prior to builds and creates the top-level Makefiles required to build the application. The configure script may perform tests on the current system, create additional files, or run other directives as per parameters provided by the builder.
The most commonly used Autotools utilities are:
- autoconf — generates the configure script from an input file (configure.ac, for example)
- automake — creates the Makefile for a project on a specific system
- autoscan — generates a preliminary input file (that is, configure.scan), which can be edited to create a final configure.ac to be used by autoconf
All tools in the Autotools suite are part of the Development Tools group package. You can install this package group to install the entire Autotools suite, or use yum to install any tools in the suite as you wish.
Note that Eclipse does not currently integrate version control systems such as git or mercurial. As such, Autotools projects that use git repositories will be required to be checked out outside the Eclipse workspace. Afterwards, you can specify the source location for such projects in Eclipse. Any repository manipulation (commits, or updates for example) is done via the command line.
The most crucial function of Autotools is the creation of the configure script. This script tests systems for tools, input files, and other features it can use in order to build the project [1]. The configure script generates a Makefile which allows the make tool to build the project based on the system configuration.
To create the configure script, first create an input file. Then feed it to an Autotools utility in order to create the configure script. This input file is typically configure.ac or Makefile.am; the former is usually processed by autoconf, while the latter is fed to automake.
If a Makefile.am input file is available, the automake utility creates a Makefile template (that is, Makefile.in), which may see information collected at configuration time. For example, the Makefile may have to link to a particular library if and only if that library is already installed. When the configure script runs, automake will use the Makefile.in templates to create a Makefile.
If a configure.ac file is available instead, then autoconf will automatically create the configure script based on the macros invoked by configure.ac. To create a preliminary configure.ac, use the autoscan utility and edit the file accordingly.
For more information about Autotools, see the man pages for autoconf, automake, autoscan and most tools included in the Autotools suite. In addition, the Autotools community provides extensive documentation on autoconf and automake on the following websites, including a walk-through of a simple hello program and a list of the tests that configure can perform.
Debug information in Red Hat Enterprise Linux 6 is stored in the DWARF format, version 3 (gcc -g is equivalent to gcc -gdwarf-3). DWARF debuginfo includes:
- source files used for compilation, including their source line numbers
gcc -g is the same as gcc -g2. To change the macro information to level three, use gcc -g3. readelf -WS file to see which sections are used in a file. Table 4.1. debuginfo levels
Command | ||
|---|---|---|
Stripped | or | Only the symbols required for runtime linkage with shared libraries are present. |
ELF symbols | Only the names of functions and variables are present, no binding to the source files and no types. | |
DWARF debuginfo with macros | The source file names and line numbers are known, including types. | |
DWARF debuginfo with macros | Similar to gcc -g but the macros are known to GDB. |
Use gcc -g and its variants to store the information into DWARF.
Using gcc -rdynamic is discouraged. For specific symbols, use gcc -Wl,--dynamic-list=... instead. If gcc -rdynamic is used, the strip command or the -s gcc option have no effect. This is because all ELF symbols are kept in the binary for possible runtime linkage with shared libraries.
ELF symbols can be read with the readelf -s file command. DWARF debug information can be read with the readelf -w file command; readelf -wi file is a good verification of debuginfo compiled within your program. The commands strip file or gcc -s are commonly accidentally executed on the output during various compilation stages of the program.
The readelf -w file command can also be used to show a special section called .eh_frame with a format and purpose similar to the DWARF section .debug_frame. The .eh_frame section is used for runtime C++ exception resolution and is present even if the -g gcc option was not used. It is kept in the primary RPM and is never present in the debuginfo RPMs.
Debuginfo RPMs contain the sections .symtab and .debug_*. Neither .eh_frame, .eh_frame_hdr, nor .dynsym are moved to or present in debuginfo RPMs, as those sections are needed during program runtime.
Red Hat Enterprise Linux provides -debuginfo packages for all architecture-dependent RPMs included in the operating system. A packagename-debuginfo-version-release.architecture.rpm package contains detailed information about the relationship of the package source files and the final installed binary. The debuginfo packages contain both .debug files, which in turn contain DWARF debuginfo, and the source files used for compiling the binary packages.
Use the gcc compilation option -g for your own programs. The debugging experience is better if no optimizations (gcc option -O, such as -O2) are applied with -g.
To install the -debuginfo package of a package (that is, typically packagename-debuginfo), the machine first has to be subscribed to the corresponding Debuginfo channel. For example, for Red Hat Enterprise Server 6, the corresponding channel would be Red Hat Enterprise Linux Server Debuginfo (v. 6).
System packages are compiled with optimizations (gcc option -O2). This means that some variables will be displayed as <optimized out>. Stepping through code will 'jump' a little but a crash can still be analyzed. If some debugging information is missing because of the optimizations, the right variable information can be found by disassembling the code and matching it to the source manually. This is applicable only in exceptional cases and is not suitable for regular debugging.
4.2.1. Installing Debuginfo Packages for Core Files Analysis
If the ulimit -c unlimited setting is in use when a process crashes, the core file is dumped into the current directory. The core file contains only the memory areas modified by the process from the original state of disk files. In order to perform a full analysis of a crash, a core file is required to have:
- the executable binary which has crashed, such as /usr/sbin/sendmail
- all the shared libraries loaded in the binary when it crashed
- .debug files and source files (both stored in debuginfo RPMs) for the executable and all of its loaded libraries
An exact version-release.architecture match for all the RPMs involved, or the same build of your own compiled binaries, is needed. At the time of the analysis, the application may have already been recompiled or updated by yum on the disk, rendering the files inappropriate for the core file analysis.
0x400000in the first line). - The size of the binary (for example,
+0x207000in the first line). - The 160-bit SHA-1 build-id of the binary (for example,
2818b2009547f780a5639c904cded443e564973ein the first line). - The in-memory address where the build-id bytes were stored (for example,
@0x400284in the first line). - The on-disk binary file, if available (for example,
/bin/sleepin the first line). This was found byeu-unstripfor this module. - The on-disk debuginfo file, if available (for example,
/usr/lib/debug/bin/sleep.debug). However, best practice is to use the binary file reference instead. - The shared library name as stored in the shared library list in the core file (for example,
libc.so.6in the third line).
For each build-id (for example, ab/cdef0123456789012345678901234567890123), a symbolic link is included in its debuginfo RPM. Using the /bin/sleep executable above as an example, the coreutils-debuginfo RPM contains, among other files, the corresponding .debug file and build-id link.
In some cases, GDB does not know the name-debuginfo-version-release.rpm package; it only knows the build-id. In such cases, GDB suggests a different command to install the matching package. The following commands are useful for verifying a package and its debuginfo counterpart:
- rpm -q packagename packagename-debuginfo
- rpm -V packagename packagename-debuginfo
- rpm -qi packagename packagename-debuginfo
Local package repositories can be created with /usr/bin/createrepo.
- Control the execution state of the code being debugged, principally whether it's running or stopped.
- Detect the execution of particular sections of code (for example, stop running code when it reaches a specified area of interest to the programmer).
- Detect access to particular areas of memory (for example, stop running code when it accesses a specified variable).
- Execute portions of code (from an otherwise stopped program) in a controlled manner.
- Detect various programmatic asynchronous events such as signals.
- The nature of the variable
- Whether the program was compiled with the -g flag
The most commonly used GDB commands are:
br (breakpoint) — The breakpoint command instructs GDB to halt execution upon reaching a specified point in the code.
r (run) — The run command starts the execution of the program. If run is executed with any arguments, those arguments are passed on to the executable as if the program had been started normally. Users normally issue this command after setting breakpoints.
p (print) — The print command displays the value of the argument given, and that argument can be almost anything relevant to the program. Usually, the argument is the name of a variable of any complexity, from a simple single value to a structure. An argument can also be an expression valid in the current language, including the use of program variables and library functions, or functions defined in the program being tested.
bt (backtrace) — The backtrace command displays the chain of function calls used up until the execution was terminated. This is useful for investigating serious bugs (such as segmentation faults) with elusive causes.
l (list) — The list command shows the line in the source code corresponding to where the program stopped.
c (continue) — The continue command restarts the execution of the program, which will continue to execute until it encounters a breakpoint, runs into a specified or emergent condition (for example, an error), or terminates.
n (next) — Like continue, the next command also restarts execution; however, in addition to the stopping conditions implicit in the continue command, next will also halt execution at the next sequential line of code in the current source file.
s (step) — Like next, the step command also halts execution at each sequential line of code in the current source file. However, if execution is currently stopped at a source line containing a function call, GDB stops execution after entering the function call (rather than executing it).
fini (finish) — The finish command resumes execution, but halts when execution returns from a function.
q (quit) — The quit command exits GDB.
h (help) — The help command provides access to its extensive internal documentation. The command takes arguments: help breakpoint (or h br), for example, shows a detailed description of the breakpoint command. See the help output of each command for more detailed information.
Procedure 4.1. Debugging a 'Hello World' Program
- Compile hello.c into an executable with the debug flag set, as in: gcc -g hello.c -o hello. Ensure that the resulting binary hello is in the same directory as hello.c.
- Run gdb on the hello binary, that is, gdb hello.
- After several introductory comments, gdb will display the default GDB prompt: (gdb)
- The variable hello is global, so it can be seen even before the main procedure starts: print hello. Note that the print targets hello[0] and *hello require the evaluation of an expression, as does, for example, *(hello + 1).
- Next, list the source. The list command reveals that the fprintf call is on line 8. Apply a breakpoint on that line and run the code: break 8, then run.
- Finally, use the next command to step past the fprintf call, executing it.
A condensed transcript of this session is sketched after this list.
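A minimal sketch of the session, assuming the fprintf call is on line 8 as reported by list:

```
gcc -g hello.c -o hello    # build with debug information
gdb hello                  # start GDB on the binary
(gdb) print hello          # inspect the global variable before the program runs
(gdb) break 8              # breakpoint on the fprintf line
(gdb) run                  # execute until the breakpoint
(gdb) next                 # step past the fprintf call, executing it
(gdb) quit
```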
A conditional breakpoint avoids executing the continue command thousands of times just to get to the iteration that crashed. Use info br to review the breakpoint status.
Forked processes present a barrier to debugging, because GDB follows only one process by default. The set follow-fork-mode feature is used to overcome this barrier, allowing programmers to follow a child process instead of the parent process.
- set follow-fork-mode parent — The original process is debugged after a fork. The child process runs unimpeded.
- set follow-fork-mode child — The new process is debugged after a fork. The parent process runs unimpeded.
- show follow-fork-mode — Display the current debugger response to a fork call.
Use the set detach-on-fork command to debug both the parent and the child processes after a fork, or retain debugger control over them both.
- set detach-on-fork on — The child process (or parent process, depending on the value of follow-fork-mode) will be detached and allowed to run independently. This is the default.
- set detach-on-fork off — Both processes will be held under the control of GDB. One process (child or parent, depending on the value of follow-fork-mode) is debugged as usual, while the other is suspended.
- show detach-on-fork — Show whether detach-on-fork mode is on or off.
For instance, a test program compiled with gcc -g fork.c -o fork -lpthread and examined under GDB will show the fork behavior; to follow the child instead, use set follow-fork-mode child.
GDB settings can be made permanent in the ~/.gdbinit file. For example, if set follow-fork-mode ask is added to ~/.gdbinit, then ask mode becomes the default mode.
To debug individual threads while the others run freely, enable non-stop and asynchronous operation with set non-stop on and set target-async on. These can be added to .gdbinit. Once that functionality is turned on, GDB is ready to conduct thread debugging.
The info threads command provides a summary of the program's threads and some details about their current state (in the simplest case, only one thread has been created so far). Use the thread <thread number> command to switch the focus to another thread. Threads are resumed with continue; running continue in the background allows the GDB prompt to return so other commands can be executed. Using interrupt, execution can be stopped should a background thread become interesting again.
Variable Tracking at Assignments (VTA) improves the debugging of optimized (gcc -O2 -g built) code. It also displays the <optimized out> message less. VTA is enabled by default when compiling with both optimizations and debug information (gcc -O -g or, more commonly, gcc -O2 -g). To disable VTA during such builds, add the -fno-var-tracking-assignments option. In addition, the VTA infrastructure includes the new gcc option -fcompare-debug. This option tests code compiled by GCC with debug information and without debug information: the test passes if the two binaries are identical. This test ensures that executable code is not affected by any debugging options, which further ensures that there are no hidden bugs in the debug code. Note that -fcompare-debug adds significant cost in compilation time. See man gcc for details about this option.
The GDB print command outputs comprehensive debugging information for a target application. GDB aims to provide as much debugging data as it can to users; however, this means that for highly complex programs the amount of data can become very cryptic.
In addition, GDB does not provide tools to help decipher this print output, nor does it empower users to easily create tools that can help decipher program data. This makes the practice of reading and understanding debugging data quite arcane, particularly for large, complex projects.
For most developers, the only way to customize GDB's print output (and make it more meaningful) is to revise and recompile GDB. However, very few developers can actually do this. Further, this practice will not scale well, particularly if the developer must also debug other programs that are heterogeneous and contain equally complex debugging data.
To pass program data to a set of registered Python pretty-printers, the GDB development team added hooks to the GDB printing code. These hooks were implemented with safety in mind: the built-in GDB printing code is still intact, allowing it to serve as a default fallback printing logic. As such, if no specialized printers are available, GDB will still print debugging data the way it always did. This ensures that GDB is backwards-compatible; users who do not require pretty-printers can still continue using GDB.
This new 'Python-scripted' approach allows users to distill as much knowledge as required into specific printers. As such, a project can have an entire library of printer scripts that parses program data in a unique manner specific to its user's requirements. There is no limit to the number of printers a user can build for a specific project; what's more, being able to customize debugging data script by script offers users an easier way to re-use and re-purpose printer scripts — or even a whole library of them.
The best part about this approach is its lower barrier to entry. Python scripting is comparatively easy to learn and has a large library of free documentation available online. In addition, most programmers already have basic to intermediate experience in Python scripting, or in scripting in general.
Compile the program with g++ -g fruit.cc -o fruit. Now, examine this program with GDB. The output {fruit = 1} is correct because that is the internal representation of 'fruit' in the data structure 'Fruit'. However, this is not easily read by humans as it is difficult to tell which fruit the integer 1 represents.
In the pretty-printer script, gdb.pretty_printers.append(lookup_type) adds the function lookup_type to GDB's list of printer lookup functions. lookup_type is responsible for examining the type of object to be printed, and returning an appropriate pretty printer. The object is passed by GDB in the parameter val. val.type is an attribute which represents the type of the pretty printer.
FruitPrinter is where the actual work is done, more specifically in the to_string function of that class. In this function, the integer fruit is retrieved using the Python dictionary syntax self.val['fruit']. Then the name is determined using that value. The string returned by this function is the string that will be printed to the user.
After saving the script as fruit.py, load it into GDB with the source fruit.py command.
Red Hat Enterprise Linux 6 provides a number of profiling tools (such as Valgrind, OProfile, perf, and SystemTap) to collect profiling data. Each tool is suitable for performing specific types of profile runs, as described in the following sections.
The memcheck tool detects memory management problems in programs by checking all reads from and writes to memory and by intercepting all calls to malloc, new, free, and delete. memcheck is perhaps the most used Valgrind tool, as memory management problems can be difficult to detect using other means. Such problems often remain undetected for long periods, eventually causing crashes that are difficult to diagnose.
Like cachegrind, callgrind can model cache behavior. However, the main purpose of callgrind is to record callgraph data for the executed code.
The helgrind tool detects synchronization errors in programs that use POSIX threads, such as:
- Potential deadlocks arising from lock ordering problems
- Data races (that is, accessing memory without adequate locking)
Valgrind also provides the lackey tool, which is a sample that can be used as a template for generating your own tools.
To run a Valgrind tool on a program, use: valgrind --tool=toolname program
In addition to the suite of Valgrind tools, none is also a valid argument for toolname; this argument allows you to run a program under Valgrind without performing any profiling. This is useful for debugging or benchmarking Valgrind itself.
You can direct Valgrind's output to a specific file with --log-file=filename. For example, to check the memory usage of the executable file hello and send profile information to output, use: valgrind --tool=memcheck --log-file=output hello
For more information about Valgrind, see man valgrind. Red Hat Enterprise Linux also provides a comprehensive Valgrind Documentation book available as PDF and HTML in: /usr/share/doc/valgrind-version/valgrind_manual.pdf
Note that the legacy opcontrol tool and the new operf tool are mutually exclusive.
OProfile now provides the operf tool as an alternative to opcontrol. The operf tool uses the Linux Performance Events subsystem, allowing you to target your profiling more precisely, as a single process or system-wide, and allowing OProfile to co-exist better with other tools using the performance monitoring hardware on your system. Unlike opcontrol, no initial setup is required, and it can be used without the root privileges unless the --system-wide option is in use.
In legacy mode, OProfile uses a daemon (oprofiled) and requires you to configure a profile session. The legacy mode (opcontrol, oprofiled, and post-processing tools) remains available, but it is no longer the recommended profiling method. For a detailed description of the legacy mode, see the Configuring OProfile Using Legacy Mode chapter in the System Administrator's Guide.
operf is the recommended tool for collecting profiling data. The tool does not require any initial configuration, and all options are passed to it on the command line. Unlike the legacy opcontrol tool, operf can run without root privileges. See the Using operf chapter in the System Administrator's Guide for detailed instructions on how to use the operf tool.
Example 5.1. Using operf to Profile a Java Program
In this example, the operf tool is used to collect profiling data from a Java (JIT) program, and the opreport tool is then used to output per-symbol data; the whole session is sketched after this list.
- Install the demonstration Java program used in this example. It is a part of the java-1.8.0-openjdk-demo package, which is included in the Optional channel. See Enabling Supplementary and Optional Repositories for instructions on how to use the Optional channel. When the Optional channel is enabled, install the package: yum install java-1.8.0-openjdk-demo
- Install the oprofile-jit package for OProfile to be able to collect profiling data from Java programs: yum install oprofile-jit
- Change into the directory with the demonstration program:
- Change into the home directory and analyze the collected data:
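A sketch of the whole session under the stated assumptions (the demonstration program's directory and the operf invocation are hypothetical placeholders; opreport --symbols is the standard per-symbol report):

```
yum install java-1.8.0-openjdk-demo   # step 1: demo program (Optional channel enabled)
yum install oprofile-jit              # step 2: JIT profiling support for OProfile
cd <demo-directory>                   # step 3: directory with the demonstration program
operf java <demo-class>               # profile the Java program (hypothetical invocation)
cd ~ && opreport --symbols            # step 4: per-symbol analysis of the collected data
```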
A new operf program is now available that allows non-root users to profile single processes. This can also be used for system-wide profiling, but in this case, root authority is required.
OProfile also newly supports the following hardware:
- Intel Haswell processors
- AMD Generic Performance Events
- AMD Instruction Based Sampling (IBS) is not currently supported with the new operf program. Use the legacy opcontrol commands for IBS profiling.
- The type of the sample header mtime field has changed to u64, which makes it impossible to process sample data acquired using previous versions of OProfile.
- opcontrol fails to allocate the hardware performance counters it needs if the NMI watchdog is enabled. The NMI watchdog, which monitors system interrupts, uses the perf tool, which reserves all performance counters.
OProfile documentation is installed in file:///usr/share/doc/oprofile-version/:
- file:///usr/share/doc/oprofile-version/oprofile.html
- file:///usr/share/doc/oprofile-version/internals.html
A SystemTap session proceeds roughly as follows:
- Write SystemTap scripts that specify which system events (for example, virtual file system reads, packet transmissions) should trigger specified actions (for example, print, parse, or otherwise manipulate data).
- SystemTap translates the script into a C program, which it compiles into a kernel module.
- SystemTap loads the kernel module to perform the actual probe.
- kernel-variant-debuginfo-version
5.4. Performance Counters for Linux (PCL) Tools and perf
Use perf to analyze the collected performance data. perf commands include the following:
perf stat — This perf command provides overall statistics for common performance events, including instructions executed and clock cycles consumed. Options allow selection of events other than the default measurement events.
perf record — This perf command records performance data into a file which can be later analyzed using perf report.
perf report — This perf command reads the performance data from a file and analyzes the recorded data.
perf list — This perf command lists the events available on a particular machine. These events will vary based on the performance monitoring hardware and the software configuration of the system.
Use perf help to obtain a complete list of perf commands. To retrieve man page information on each perf command, use perf help command.
For example, to collect overall statistics for the make command and its children, use the following command: perf stat -- make
The perf command collects a number of different hardware and software counters and then prints the collected information. The perf tool can also record samples. For example, to record data on the make command and its children, use: perf record -- make
A minimal end-to-end session is sketched below.
A new {} group syntax has been added that allows the creation of event groups based on the way they are specified on the command line. The --group or -g options remain the same; if specified for the record, stat, or top command, all the specified events become members of a single group with the first event as a group leader. The {} group syntax allows the creation of a group with, for example, the :kp modifier being used for faults, and the :p modifier being used for the cache-references event.
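A minimal sketch of the basic workflow described above (a sketch, not the guide's exact transcript):

```
perf stat -- make all      # overall statistics for make and its children
perf record -- make all    # record samples into perf.data
perf report                # analyze the recorded samples
perf list                  # list the events available on this machine
```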
Both OProfile and Performance Counters for Linux (PCL) use the same hardware Performance Monitoring Unit (PMU), so they cannot run simultaneously. If OProfile is currently running while attempting to use the PCL perf command, an error message occurs when starting OProfile because the counters are already reserved. To use the perf command, first shut down OProfile: opcontrol --deinit
Then analyze perf.data to determine the relative frequency of samples. The report output includes the command, object, and function for the samples. Use perf report to output an analysis of perf.data. For example, the following command produces a report of the executable that consumes the most time: perf report --sort=comm
The resulting output would show that make spends most of this time in xsltproc and pdfxmltex. To reduce the time for make to complete, focus on xsltproc and pdfxmltex. To list the functions executed by xsltproc, run: perf report -n --comms=xsltproc
The ftrace framework provides users with several tracing capabilities, accessible through an interface much simpler than SystemTap's. This framework uses a set of virtual files in the debugfs file system; these files enable specific tracers. The ftrace function tracer outputs each function called in the kernel in real time; other tracers within the ftrace framework can also be used to analyze wakeup latency, task switches, kernel events, and the like.
New tracers can be added to ftrace, making it a flexible solution for analyzing kernel events. The ftrace framework is useful for debugging or analyzing latencies and performance issues that take place outside of user-space. Unlike other profilers documented in this guide, ftrace is a built-in feature of the kernel.
In Red Hat Enterprise Linux 6, the kernel is configured with the CONFIG_FTRACE=y option, which provides the interfaces required by ftrace. To use ftrace, mount the debugfs file system as follows: mount -t debugfs nodev /sys/kernel/debug
All the ftrace utilities are located in /sys/kernel/debug/tracing/. View the /sys/kernel/debug/tracing/available_tracers file to find out what tracers are available for your kernel: cat /sys/kernel/debug/tracing/available_tracers
To use a specific tracer, write it into /sys/kernel/debug/tracing/current_tracer. For example, wakeup traces and records the maximum time it takes for the highest-priority task to be scheduled after the task wakes up. To use it: echo wakeup > /sys/kernel/debug/tracing/current_tracer
To start or stop tracing, write to /sys/kernel/debug/tracing/tracing_on, as in:
- echo 1 > /sys/kernel/debug/tracing/tracing_on (enables tracing)
- echo 0 > /sys/kernel/debug/tracing/tracing_on (disables tracing)
The results of the trace can be viewed from the /sys/kernel/debug/tracing/trace file. The trace_pipe file in the same directory provides the same output but is meant to be piped into a command; unlike /sys/kernel/debug/tracing/trace, reading from this file consumes its output.
The ftrace framework is fully documented in the following files:
file:///usr/share/doc/kernel-doc-version/Documentation/trace/ftrace.txt - function tracer guts:
file:///usr/share/doc/kernel-doc-version/Documentation/trace/ftrace-design.txt
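A minimal sketch of a complete session with the function tracer (assuming debugfs is mounted at /sys/kernel/debug and the kernel was built with the function tracer):

cat /sys/kernel/debug/tracing/available_tracers (confirm that function is listed)
echo function > /sys/kernel/debug/tracing/current_tracer
echo 1 > /sys/kernel/debug/tracing/tracing_on (enables tracing)
head /sys/kernel/debug/tracing/trace (view a snapshot of the recorded calls)
echo 0 > /sys/kernel/debug/tracing/tracing_on (disables tracing)
echo nop > /sys/kernel/debug/tracing/current_tracer (resets the tracer)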
Doxygen supports output in several formats, including:
- PostScript
- Compressed HTML

It can document sources written in several languages, including:
- C++
- Objective-C
- Java
- PHP
- Fortran
To create a template configuration file to work from, run doxygen -g config-file. The variable config-file is the name of the configuration file; if it is omitted from the command, the file is called Doxyfile by default. Another useful option when creating the configuration file is the use of a minus sign (-) as the file name. This is useful for scripting, as it causes Doxygen to attempt to read the configuration file from standard input (stdin).

The configuration file can also be edited with the GUI front end doxywizard. If this is the preferred method of editing, documentation can be found on the Doxywizard usage page of the Doxygen documentation website.

For small projects consisting mainly of C or C++ source and header files, it is not required to change anything. However, if the project is large and consists of a source directory or tree, assign the root directory or directories to the INPUT tag. The following tags are commonly adjusted (a sample of the resulting settings follows this list):
- FILE_PATTERNS: file patterns (for example, *.cpp or *.h) can be added to this tag, allowing only files that match one of the patterns to be parsed.
- RECURSIVE: setting this to yes allows recursive parsing of a source tree.
- EXCLUDE and EXCLUDE_PATTERNS: these are used to further fine-tune the files that are parsed by adding file patterns to avoid. For example, to omit all test directories from a source tree, use EXCLUDE_PATTERNS = */test/*.
- EXTRACT_ALL: when this is set to yes, doxygen pretends that everything in the source files is documented, to give an idea of how a fully documented project would look. However, warnings regarding undocumented members are not generated in this mode; set it back to no when finished to correct this.
- SOURCE_BROWSER and INLINE_SOURCES: by setting the SOURCE_BROWSER tag to yes, doxygen generates a cross-reference between a piece of software's definition in its source files and the documentation existing about it. These sources can also be included in the documentation by setting INLINE_SOURCES to yes.
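As an illustration, a minimal sketch of a Doxyfile using the options above, assuming a hypothetical project rooted in a src/ directory:

INPUT            = src
FILE_PATTERNS    = *.cpp *.h
RECURSIVE        = yes
EXCLUDE_PATTERNS = */test/*
EXTRACT_ALL      = no
SOURCE_BROWSER   = yes
INLINE_SOURCES   = yes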
Running doxygen config-file creates html, rtf, latex, xml, and/or man directories in whichever directory doxygen is started in, containing the documentation for the corresponding file type. This documentation can be viewed with an HTML browser that supports cascading style sheets (CSS), as well as DHTML and JavaScript for some sections. Point the browser (for example, Mozilla, Safari, Konqueror, or Internet Explorer 6) to the index.html in the html directory.
Doxygen writes a Makefile into the latex directory to make compiling the LaTeX documentation easy; a recent teTeX distribution is required. What this directory contains depends on whether USE_PDFLATEX is set to no. When it is, typing make while in the latex directory generates refman.dvi. This can then be viewed with xdvi or converted to refman.ps by typing make ps. Note that this requires dvips.
Typing make ps_2on1 prints two pages on one physical page. It is also possible to convert to a PDF if a ghostscript interpreter is installed, by using the command make pdf; another valid command is make pdf_2on1. When doing this, set the PDF_HYPERLINKS and USE_PDFLATEX tags to yes; the Makefile will then contain only a target to build refman.pdf directly. The RTF output is combined into a single file, refman.rtf, which is designed to be imported into Microsoft Word. Some information is encoded using fields, but it can be shown by selecting all (Ctrl+A or Edit -> Select All), then right-clicking and selecting the Toggle Fields option from the drop-down menu.
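As a minimal sketch of the build sequence described above (assuming a teTeX distribution with dvips is installed and USE_PDFLATEX is set to no):

doxygen Doxyfile (generates the html, latex, and other output directories)
cd latex
make (generates refman.dvi)
make ps (converts it to refman.ps; requires dvips)
make pdf (generates refman.pdf; requires a ghostscript interpreter)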
The output in the xml directory consists of a number of files, one for each compound gathered by doxygen, as well as an index.xml. An XSLT script, combine.xslt, is also created; it is used to combine all the XML files into a single file. Along with this, two XML schema files are created: index.xsd for the index file, and compound.xsd for the compound files, which describe the possible elements, their attributes, and how they are structured.
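For example, assuming xsltproc is installed, the generated XML files can be merged into a single document with the provided script (the output file name all.xml is arbitrary):

xsltproc combine.xslt index.xml > all.xml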
The documentation in the man directory can be viewed with the man program after ensuring that this man directory is included in the man path (see the manpath command). Be aware that, due to limitations of the man page format, information such as diagrams, cross-references, and formulas will be lost.
- First, ensure that EXTRACT_ALL is set to no so warnings are correctly generated and documentation is built properly. This allows doxygen to create documentation for documented members, files, classes, and namespaces.
- There are two ways this documentation can be created:
- A special documentation block
- This comment block, containing additional marking so Doxygen knows it is part of the documentation, is written in either C or C++. It consists of a brief description and a detailed description, both of which are optional. What is not optional, however, is the in-body description, which links together all the comment blocks found in the body of the method or function. While more than one brief or detailed description is allowed, this is not recommended, as the order is not specified. The following details the ways in which a comment block can be marked as a detailed description:
- C-style comment block, starting with two asterisks (*) in the JavaDoc style.
- C-style comment block using the Qt style, consisting of an exclamation mark (!) instead of an extra asterisk.
- The beginning asterisks on the documentation lines can be left out in both cases if that is preferred.
- In C++, comment lines starting with either three forward slashes (///) or two forward slashes and an exclamation mark (//!) are also acceptable.
- Alternatively, in order to make the comment blocks more visible, a line of asterisks or forward slashes can be used. Note that two forward slashes at the end of a normal comment block start a special comment block.
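Minimal sketches of these styles, using a hypothetical function name:

/**
 * JavaDoc style: a C-style block starting with two asterisks.
 */
void copy_data(void);

/*!
 * Qt style: an exclamation mark instead of the extra asterisk.
 */
void copy_data(void);

/// C++-style lines with three forward slashes
/// (//! lines are equally acceptable).
void copy_data(void);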
There are three ways to add a brief description to documentation.
- To add a brief description, use \brief above one of the comment blocks. The brief section ends at the end of the paragraph, and any further paragraphs are the detailed descriptions.
- By setting JAVADOC_AUTOBRIEF to yes, the brief description lasts only until the first dot followed by a space or new line, consequently limiting the brief description to a single sentence. This can also be used with the above-mentioned three-slash comment blocks (///).
- The third option is to use a special C++-style comment that does not span more than one line. A blank line is required to separate the brief description from the detailed description, and JAVADOC_AUTOBRIEF must be set to no. Sketches of all three approaches follow this list.
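Sketches of the three approaches, with hypothetical function names:

/**
 * \brief The brief description, ending with this paragraph.
 *
 * The detailed description, starting in the next paragraph.
 */
void copy_data(void);

/** With JAVADOC_AUTOBRIEF set to yes, the brief ends at this first dot. The
 *  remainder becomes the detailed description.
 */
void send_packet(void);

//! A one-line brief description.

//! The detailed description; the blank line above separates the two,
//! and JAVADOC_AUTOBRIEF must be set to no.
void reset_state(void);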
Examples of a documented piece of C++ code using the Qt style can be found on the Doxygen documentation website. It is also possible to have the documentation after members of a file, struct, union, class, or enum. To do this, add a < marker in the comment block; for brief descriptions after a member, use the ///< form, as in the sketch below. Examples of these, and of how the HTML is produced, can be viewed on the Doxygen documentation website.
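A sketch of documentation placed after members, with hypothetical field names:

int counter; /*!< Qt-style detailed description after the member. */
int retries; /**< JavaDoc-style detailed description after the member. */
int flags;   ///< Brief description after the member.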
- Documentation at other places
- While it is preferable to place documentation in front of the code it is documenting, at times it is only possible to put it in a different location, especially if a file is to be documented; after all, it is impossible to place the documentation in front of a file. This is best avoided unless absolutely necessary, as it can lead to some duplication of information. To do this, it is important to have a structural command inside the documentation block. Structural commands start with a backslash (\) or an at-sign (@) for JavaDoc and are followed by one or more parameters. In the example after this list, the command \class indicates that the comment block contains documentation for the class 'Test'. Other structural commands include:
- \union: document a union
- \fn: document a function
- \var: document a variable, typedef, or enum value
- \typedef: document a type definition
- \namespace: document a namespace
- \interface: document an IDL interface
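A sketch of such a block, documenting the class 'Test' away from its definition:

/*!
 * \class Test
 * \brief A brief description of the Test class.
 *
 * A more detailed description of the Test class, placed
 * elsewhere in the sources or in a separate header.
 */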
- Next, the contents of a special documentation block are parsed before being written to the HTML and/or LaTeX output directories. This includes:
- Any white space and asterisks (*) are removed.
- Words are linked to their corresponding documentation. Where a word is preceded by a percent sign (%), the percent sign is removed and the word remains unlinked, as shown in the sketch after this list.
- Where certain patterns are found in the text, links to members are created. Examples of this can be found on the automatic link generation page of the Doxygen documentation website.
- When the documentation is for LaTeX, HTML tags are interpreted and converted to LaTeX equivalents. A list of supported HTML tags can be found on the HTML commands page of the Doxygen documentation website.
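A short sketch of these parsing rules (assuming a documented class named Test exists in the project):

/**
 * The word Test is linked to the class documentation automatically;
 * writing %Test suppresses the link, and the percent sign is removed.
 * In LaTeX output, HTML such as <b>bold</b> is converted to its
 * LaTeX equivalent.
 */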
mallopt is a library call that allows a program to change the behavior of the malloc memory allocator.

Example A.1. Allocator heuristics
The allocator uses heuristics to distinguish between long-lived and short-lived objects. For the former, it attempts to allocate with mmap; for the latter, it attempts to allocate with sbrk. To override these heuristics, set the M_MMAP_THRESHOLD parameter through the mallopt interface.

The allocator also places limits on, for example, the number of arenas it creates; mallopt allows the developer to override those limits.

Example A.2. mallopt
The following parameters can be adjusted through mallopt:
- M_TRIM_THRESHOLD
- M_MMAP_THRESHOLD
- M_CHECK_ACTION
- M_ARENA_TEST
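A minimal sketch of tuning the allocator through mallopt, using hypothetical threshold values:

#include <malloc.h>
#include <stdio.h>

int main(void)
{
    /* Allocations below 256 KiB come from the heap; larger ones use mmap. */
    if (mallopt(M_MMAP_THRESHOLD, 256 * 1024) != 1)
        fprintf(stderr, "mallopt(M_MMAP_THRESHOLD) failed\n");

    /* Keep up to 1 MiB of free memory at the top of the heap before
       trimming it back to the operating system. */
    if (mallopt(M_TRIM_THRESHOLD, 1024 * 1024) != 1)
        fprintf(stderr, "mallopt(M_TRIM_THRESHOLD) failed\n");

    return 0;
}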
malloc_trim is a library call that requests the allocator return any unused memory back to the operating system. This normally happens automatically when an object is freed. However, in some cases when freeing small objects, glibc might not immediately release the memory back to the operating system; it keeps the free memory to satisfy upcoming allocation requests, because allocating from and releasing memory back to the operating system is expensive.

malloc_stats is used to dump information about the allocator's internal state to stderr. mallinfo is similar, but it places the state into a structure instead.

Further information on mallopt can be found at http://www.makelinux.net/man/3/M/mallopt and http://www.gnu.org/software/libc/manual/html_node/Malloc-Tunable-Parameters.html.
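A sketch of the calls described above:

#include <malloc.h>
#include <stdlib.h>

int main(void)
{
    void *p = malloc(64 * 1024);
    free(p);

    malloc_trim(0);  /* request that unused memory be returned to the OS */
    malloc_stats();  /* dump the allocator's internal state to stderr */

    struct mallinfo mi = mallinfo(); /* the same state, in a structure */
    (void)mi.arena;  /* for example, bytes allocated with sbrk */

    return 0;
}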
Revision History
- Revision 6-9.3, Thu 25 May 2017
- Revision 6-9.2, Mon 3 April 2017
- Revision 2-60, Wed 4 May 2016
- Revision 2-56, Tue Jul 6 2015
- Revision 2-55, Wed Apr 15 2015
- Revision 2-54, Tue Dec 16 2014
- Revision 2-52, Wed Nov 11 2014
- Revision 2-51, Fri Oct 10 2014
A
- advantages
- Python pretty-printers
- debugging, Python Pretty-Printers
- Akonadi
- KDE Development Framework
- libraries and runtime support, KDE4 Architecture
- architecture, KDE4
- KDE Development Framework
- libraries and runtime support, KDE4 Architecture
- Autotools
- compiling and building, Autotools
B
- backtrace
- tools
- GNU debugger, Simple GDB
- Boost
- libraries and runtime support, Boost
- boost-doc
- Boost
- libraries and runtime support, Additional Information
- breakpoint
- fundamentals
- GNU debugger, Simple GDB
- breakpoints (conditional)
- GNU debugger, Conditional Breakpoints
- build-id
- compiling and building, build-id Unique Identification of Binaries
- building
- compiling and building, Compiling and Building
C
- C++ Standard Library, GNU
- libraries and runtime support, The GNU C++ Standard Library
- cachegrind
- tools
- Valgrind, Valgrind Tools
- callgrind
- tools
- Valgrind, Valgrind Tools
- Collaborating, Collaborating
- commands
- fundamentals
- GNU debugger, Simple GDB
- profiling
- Valgrind, Valgrind Tools
- tools
- Performance Counters for Linux (PCL) and perf, Perf Tool Commands
- commonly-used commands
- Autotools
- compiling and building, Autotools
- compatibility
- libraries and runtime support, Compatibility
- compiling a C Hello World program
- usage
- GCC, Simple C Usage
- compiling a C++ Hello World program
- usage
- GCC, Simple C++ Usage
- compiling and building
- Autotools, Autotools
- commonly-used commands, Autotools
- configuration script, Configuration Script
- documentation, Autotools Documentation
- plug-in for Eclipse, Autotools Plug-in for Eclipse
- templates (supported), Autotools Plug-in for Eclipse
- build-id, build-id Unique Identification of Binaries
- GNU Compiler Collection, GNU Compiler Collection (GCC)
- documentation, GCC Documentation
- required packages, Running GCC
- usage, Running GCC
- introduction, Compiling and Building
- conditional breakpoints
- GNU debugger, Conditional Breakpoints
- configuration script
- Autotools
- compiling and building, Configuration Script
- continue
- tools
- GNU debugger, Simple GDB
D
- debugfs file system
- profiling
- ftrace, ftrace
- debugging
- debuginfo-packages, Installing Debuginfo Packages
- installation, Installing Debuginfo Packages
- GNU debugger, GDB
- fundamental mechanisms, GDB
- GDB, GDB
- requirements, GDB
- introduction, Debugging
- Python pretty-printers, Python Pretty-Printers
- advantages, Python Pretty-Printers
- debugging output (formatted), Python Pretty-Printers
- documentation, Python Pretty-Printers
- pretty-printers, Python Pretty-Printers
- variable tracking at assignments (VTA), Variable Tracking at Assignments
- debugging a Hello World program
- usage
- GNU debugger, Running GDB
- debugging output (formatted)
- Python pretty-printers
- debugging, Python Pretty-Printers
- debuginfo-packages
- debugging, Installing Debuginfo Packages
- documentation
- Autotools
- compiling and building, Autotools Documentation
- Boost
- libraries and runtime support, Additional Information
- GNU C++ Standard Library
- libraries and runtime support, Additional information
- GNU Compiler Collection
- compiling and building, GCC Documentation
- Java
- libraries and runtime support, Java Documentation
- KDE Development Framework
- libraries and runtime support, kdelibs Documentation
- OProfile
- profiling, OProfile Documentation
- Perl
- libraries and runtime support, Perl Documentation
- profiling
- ftrace, ftrace Documentation
- Python
- libraries and runtime support, Python Documentation
- Python pretty-printers
- debugging, Python Pretty-Printers
- Qt
- libraries and runtime support, Qt Library Documentation
- Ruby
- libraries and runtime support, Ruby Documentation
- SystemTap
- profiling, Additional Information
- Valgrind
- profiling, Additional information
- Documentation
- Doxygen, Doxygen
- Document sources, Documenting the Sources
- Getting Started, Getting Started
- Resources, Resources
- Running Doxygen, Running Doxygen
- Supported output and languages, Doxygen Supported Output and Languages
- Documentation Tools, Documentation Tools
- Doxygen
- Documentation, Doxygen
- document sources, Documenting the Sources
- Getting Started, Getting Started
- Resources, Resources
- Running Doxygen, Running Doxygen
- Supported output and languages, Doxygen Supported Output and Languages
F
- finish
- tools
- GNU debugger, Simple GDB
- forked execution
- GNU debugger, Forked Execution
- formatted debugging output
- Python pretty-printers
- debugging, Python Pretty-Printers
- framework (ftrace)
- profiling
- ftrace, ftrace
- ftrace
- profiling, ftrace
- debugfs file system, ftrace
- documentation, ftrace Documentation
- framework (ftrace), ftrace
- usage, Using ftrace
- function tracer
- profiling
- ftrace, ftrace
- fundamental commands
- fundamentals
- GNU debugger, Simple GDB
- fundamental mechanisms
- GNU debugger
- debugging, GDB
- fundamentals
- GNU debugger, Simple GDB
G
- gcc
- GNU Compiler Collection
- compiling and building, GNU Compiler Collection (GCC)
- GCC C
- usage
- compiling a C Hello World program, Simple C Usage
- GCC C++
- usage
- compiling a C++ Hello World program, Simple C++ Usage
- GDB
- GNU debugger
- debugging, GDB
- Git
- configuration, Installing and Configuring Git
- documentation, Additional Resources
- installation, Installing and Configuring Git
- overview, Git
- usage, Creating a New Repository
- GNOME Power Manager
- libraries and runtime support, GNOME Power Manager
- gnome-power-manager
- GNOME Power Manager
- libraries and runtime support, GNOME Power Manager
- GNU C++ Standard Library
- libraries and runtime support, The GNU C++ Standard Library
- GNU Compiler Collection
- compiling and building, GNU Compiler Collection (GCC)
- GNU debugger
- conditional breakpoints, Conditional Breakpoints
- debugging, GDB
- execution (forked), Forked Execution
- forked execution, Forked Execution
- fundamentals, Simple GDB
- breakpoint, Simple GDB
- commands, Simple GDB
- halting an executable, Simple GDB
- inspecting the state of an executable, Simple GDB
- starting an executable, Simple GDB
- interfaces (CLI and machine), Alternative User Interfaces for GDB
- thread and threaded debugging, Debugging Individual Threads
- tools, Simple GDB
- backtrace, Simple GDB
- continue, Simple GDB
- finish, Simple GDB
- help, Simple GDB
- list, Simple GDB
- next, Simple GDB
- print, Simple GDB
- quit, Simple GDB
- step, Simple GDB
- usage, Running GDB
- debugging a Hello World program, Running GDB
- variations and environments, Alternative User Interfaces for GDB
H
- halting an executable
- fundamentals
- GNU debugger, Simple GDB
- helgrind
- tools
- Valgrind, Valgrind Tools
- help
- tools
- GNU debugger, Simple GDB
I
- inspecting the state of an executable
- fundamentals
- GNU debugger, Simple GDB
- installation
- debuginfo-packages
- debugging, Installing Debuginfo Packages
- interfaces (CLI and machine)
- GNU debugger, Alternative User Interfaces for GDB
- introduction
- compiling and building, Compiling and Building
- debugging, Debugging
- libraries and runtime support, Libraries and Runtime Support
- profiling, Profiling
- SystemTap, SystemTap
- ISO 14882 Standard C++ library
- GNU C++ Standard Library
- libraries and runtime support, The GNU C++ Standard Library
K
- KDE Development Framework
- libraries and runtime support, KDE Development Framework
- KDE4 architecture
- KDE Development Framework
- libraries and runtime support, KDE4 Architecture
- kdelibs-devel
- KDE Development Framework
- libraries and runtime support, KDE Development Framework
- kernel information packages
- profiling
- SystemTap, SystemTap
- KHTML
- KDE Development Framework
- libraries and runtime support, KDE4 Architecture
- KIO
- KDE Development Framework
- libraries and runtime support, KDE4 Architecture
- KJS
- KDE Development Framework
- libraries and runtime support, KDE4 Architecture
- KNewStuff2
- KDE Development Framework
- libraries and runtime support, KDE4 Architecture
- KXMLGUI
- KDE Development Framework
- libraries and runtime support, KDE4 Architecture
L
- libraries
- runtime support, Libraries and Runtime Support
- libraries and runtime support
- Boost, Boost
- boost-doc, Additional Information
- documentation, Additional Information
- message passing interface (MPI), Boost
- meta-package, Boost
- C++ Standard Library, GNU, The GNU C++ Standard Library
- compatibility, Compatibility
- GNOME Power Manager, GNOME Power Manager
- gnome-power-manager, GNOME Power Manager
- GNU C++ Standard Library, The GNU C++ Standard Library
- documentation, Additional information
- ISO 14882 Standard C++ library, The GNU C++ Standard Library
- libstdc++-devel, The GNU C++ Standard Library
- libstdc++-docs, Additional information
- Standard Template Library, The GNU C++ Standard Library
- introduction, Libraries and Runtime Support
- Java, Java
- documentation, Java Documentation
- KDE Development Framework, KDE Development Framework
- Akonadi, KDE4 Architecture
- documentation, kdelibs Documentation
- KDE4 architecture, KDE4 Architecture
- kdelibs-devel, KDE Development Framework
- KHTML, KDE4 Architecture
- KIO, KDE4 Architecture
- KJS, KDE4 Architecture
- KNewStuff2, KDE4 Architecture
- KXMLGUI, KDE4 Architecture
- Phonon, KDE4 Architecture
- Plasma, KDE4 Architecture
- Solid, KDE4 Architecture
- Sonnet, KDE4 Architecture
- Strigi, KDE4 Architecture
- Telepathy, KDE4 Architecture
- libstdc++, The GNU C++ Standard Library
- Perl, Perl
- documentation, Perl Documentation
- module installation, Installation
- updates, Perl Updates
- Python, Python
- documentation, Python Documentation
- updates, Python Updates
- Qt, Qt
- documentation, Qt Library Documentation
- meta object compiler (MOC), Qt
- Qt Creator, Qt Creator
- qt-doc, Qt Library Documentation
- updates, Qt Updates
- widget toolkit, Qt
- Ruby, Ruby
- documentation, Ruby Documentation
- ruby-devel, Ruby
- Library and Runtime Details
- NSS Shared Databases, NSS Shared Databases
- Backwards Compatibility, Backwards Compatibility
- Documentation, NSS Shared Databases Documentation
- libstdc++
- libraries and runtime support, The GNU C++ Standard Library
- libstdc++-devel
- GNU C++ Standard Library
- libraries and runtime support, The GNU C++ Standard Library
- libstdc++-docs
- GNU C++ Standard Library
- libraries and runtime support, Additional information
- list
- tools
- GNU debugger, Simple GDB
- Performance Counters for Linux (PCL) and perf, Perf Tool Commands
M
- machine interface
- GNU debugger, Alternative User Interfaces for GDB
- mallopt, mallopt
- massif
- tools
- Valgrind, Valgrind Tools
- mechanisms
- GNU debugger
- debugging, GDB
- memcheck
- tools
- Valgrind, Valgrind Tools
- message passing interface (MPI)
- Boost
- libraries and runtime support, Boost
- meta object compiler (MOC)
- Qt
- libraries and runtime support, Qt
- meta-package
- Boost
- libraries and runtime support, Boost
- module installation
- Perl
- libraries and runtime support, Installation
N
- next
- tools
- GNU debugger, Simple GDB
- NSS Shared Databases
- Library and Runtime Details, NSS Shared Databases
- Backwards Compatibility, Backwards Compatibility
- Documentation, NSS Shared Databases Documentation
O
- OProfile
- profiling, OProfile
- documentation, OProfile Documentation
- usage, Using OProfile
P
- perf
- profiling
- Performance Counters for Linux (PCL) and perf, Performance Counters for Linux (PCL) Tools and perf
- usage
- Performance Counters for Linux (PCL) and perf, Using Perf
- Performance Counters for Linux (PCL) and perf
- profiling, Performance Counters for Linux (PCL) Tools and perf
- subsystem (PCL), Performance Counters for Linux (PCL) Tools and perf
- tools, Perf Tool Commands
- commands, Perf Tool Commands
- list, Perf Tool Commands
- record, Perf Tool Commands
- report, Perf Tool Commands
- stat, Perf Tool Commands
- usage, Using Perf
- perf, Using Perf
- Perl
- libraries and runtime support, Perl
- Phonon
- KDE Development Framework
- libraries and runtime support, KDE4 Architecture
- Plasma
- KDE Development Framework
- libraries and runtime support, KDE4 Architecture
- plug-in for Eclipse
- Autotools
- compiling and building, Autotools Plug-in for Eclipse
- pretty-printers
- Python pretty-printers
- debugging, Python Pretty-Printers
- tools
- GNU debugger, Simple GDB
- profiling
- conflict between perf and oprofile, Using Perf
- ftrace, ftrace
- introduction, Profiling
- OProfile, OProfile
- Performance Counters for Linux (PCL) and perf, Performance Counters for Linux (PCL) Tools and perf
- SystemTap, SystemTap
- Valgrind, Valgrind
- Python
- libraries and runtime support, Python
- Python pretty-printers
- debugging, Python Pretty-Printers
Q
- Qt
- libraries and runtime support, Qt
- Qt Creator
- Qt
- libraries and runtime support, Qt Creator
- qt-doc
- Qt
- libraries and runtime support, Qt Library Documentation
- quit
- tools
- GNU debugger, Simple GDB
R
- record
- tools
- Performance Counters for Linux (PCL) and perf, Perf Tool Commands
- report
- tools
- Performance Counters for Linux (PCL) and perf, Perf Tool Commands
- required packages
- GNU Compiler Collection
- compiling and building, Running GCC
- profiling
- SystemTap, SystemTap
- requirements
- GNU debugger
- debugging, GDB
- Revision control, Collaborating
- Ruby
- libraries and runtime support, Ruby
- ruby-devel
- Ruby
- libraries and runtime support, Ruby
- runtime support
- libraries, Libraries and Runtime Support
S
- scripts (SystemTap scripts)
- profiling
- SystemTap, SystemTap
- Solid
- KDE Development Framework
- libraries and runtime support, KDE4 Architecture
- Sonnet
- KDE Development Framework
- libraries and runtime support, KDE4 Architecture
- Standard Template Library
- GNU C++ Standard Library
- libraries and runtime support, The GNU C++ Standard Library
- starting an executable
- fundamentals
- GNU debugger, Simple GDB
- stat
- tools
- Performance Counters for Linux (PCL) and perf, Perf Tool Commands
- step
- tools
- GNU debugger, Simple GDB
- Strigi
- KDE Development Framework
- libraries and runtime support, KDE4 Architecture
- subsystem (PCL)
- profiling
- Performance Counters for Linux (PCL) and perf, Performance Counters for Linux (PCL) Tools and perf
- supported templates
- Autotools
- compiling and building, Autotools Plug-in for Eclipse
- SystemTap
- profiling, SystemTap
- documentation, Additional Information
- introduction, SystemTap
- kernel information packages, SystemTap
- required packages, SystemTap
- scripts (SystemTap scripts), SystemTap
T
- Telepathy
- KDE Development Framework
- libraries and runtime support, KDE4 Architecture
- templates (supported)
- Autotools
- compiling and building, Autotools Plug-in for Eclipse
- thread and threaded debugging
- GNU debugger, Debugging Individual Threads
- tools
- GNU debugger, Simple GDB
- Performance Counters for Linux (PCL) and perf, Perf Tool Commands
- profiling
- Valgrind, Valgrind Tools
- Valgrind, Valgrind Tools
U
- updates
- Perl
- libraries and runtime support, Perl Updates
- Python
- libraries and runtime support, Python Updates
- Qt
- libraries and runtime support, Qt Updates
- usage
- GNU Compiler Collection
- compiling and building, Running GCC
- GNU debugger, Running GDB
- fundamentals, Simple GDB
- Performance Counters for Linux (PCL) and perf, Using Perf
- profiling
- ftrace, Using ftrace
- OProfile, Using OProfile
- Valgrind
- profiling, Using Valgrind
V
- Valgrind
- profiling, Valgrind
- commands, Valgrind Tools
- documentation, Additional information
- tools, Valgrind Tools
- usage, Using Valgrind
- tools
- cachegrind, Valgrind Tools
- callgrind, Valgrind Tools
- helgrind, Valgrind Tools
- massif, Valgrind Tools
- memcheck, Valgrind Tools
- variable tracking at assignments (VTA)
- debugging, Variable Tracking at Assignments
- variations and environments
- GNU debugger, Alternative User Interfaces for GDB
- Version control, Collaborating
W
- widget toolkit
- Qt
- libraries and runtime support, Qt