Move documents to github.io

Will be pulled into release tarball by a release script.

Signed-off-by: Alexey Neyman <stilor@att.net>
Alexey Neyman 2017-03-20 00:10:26 -07:00
parent 6f226b5efe
commit 13f47ef576
15 changed files with 17 additions and 2423 deletions

README.md

@@ -1,250 +1,28 @@
# Crosstool-NG
[![Build Status][travis-status]][travis]
[![Throughput Graph](https://graphs.waffle.io/crosstool-ng/crosstool-ng/throughput.svg)](https://waffle.io/crosstool-ng/crosstool-ng/metrics/throughput)
[![Stories in Ready](https://badge.waffle.io/crosstool-ng/crosstool-ng.png?label=ready&title=Ready)](https://waffle.io/crosstool-ng/crosstool-ng) [![Stories in Waiting For Response](https://badge.waffle.io/crosstool-ng/crosstool-ng.png?label=waiting%20for%20response&title=Waiting%20For%20Response)](https://waffle.io/crosstool-ng/crosstool-ng) [![Stories in In Progress](https://badge.waffle.io/crosstool-ng/crosstool-ng.png?label=in%20progress&title=In%20Progress)](https://waffle.io/crosstool-ng/crosstool-ng)
## Introduction
Crosstool-NG aims at building toolchains. Toolchains are an essential component in a software development project. They will compile, assemble and link the code that is being developed. Some pieces of the toolchain will eventually end up in the resulting binaries: static libraries are but an example.
Toolchains are made of different pieces of software, each being quite complex and requiring specially crafted options to build and work seamlessly. This is usually not that easy, even in the not-so-trivial case of native toolchains. The work reaches a higher degree of complexity when it comes to cross-compilation, where it can become quite a nightmare… mostly involving host pollution and linking issues.
Some cross-toolchains exist on the internet, and can be used for general development, but they have a number of limitations:
- They can be general purpose, in that they are configured for the majority: not optimised for your specific target, which matters when you have multiple targets and want a consistent configuration across the toolchains you use.
- They can be prepared for a specific target, and thus are not easy to use, nor optimised for, or even supporting, your target.
- They often use aging components (compiler, C library, etc…) that do not support special features of your shiny new processor.

On the other hand, these toolchains offer some advantages:
- They are ready to use and quite easy to install and set up,
- They are proven if used by a wide community.
But once you want to get all the juice out of your specific hardware, you will want to build your own toolchain. This is where crosstool-NG comes into play.
There are also a number of tools that build toolchains for specific needs, which are not really scalable. Examples are:
- [buildroot](https://buildroot.org/) whose main purpose is to build complete root file systems, hence the name. But once you have your toolchain with buildroot, part of it is installed in the root-to-be, so if you want to build a whole new root, you either have to save the existing one as a template and restore it later, or restart again from scratch. This is not convenient,
- ptxdist[[en](http://www.pengutronix.de/software/ptxdist/index_en.html)][[de](http://www.pengutronix.de/software/ptxdist/index_de.html)], whose purpose is very similar to buildroot,
- other projects ([OpenEmbedded](http://www.openembedded.org/) for example), which are again used to build complete root file systems.
crosstool-NG is really targeted at building toolchains, and only toolchains. It is then up to you to use it the way you want.
With crosstool-NG, you can learn precisely how each component is configured and built, so you can finely tweak the build steps should you need it.
crosstool-NG can build anything from generic, general-purpose toolchains to very specific, dedicated ones: simply fill in the adequate options with the values you need.
Of course, it doesn't prevent you from doing your homework first. You have to know with some degree of exactitude what your target is (architecture, processor variant), what it will be used for (embedded, desktop, realtime), what degree of confidence you have in each component (stability, maintainability), and so on…
## Features
It's quite difficult to list all possible features available in crosstool-NG. Here is a list of those I find important:
* kernel-like menuconfig configuration interface
  * widespread, well-known interface
  * easy, yet powerful configuration
* growing number of supported architectures
  * see the status table for the current list
* support for alternative components in the toolchain
  * uClibc-, glibc-, newlib-, musl-libc-based toolchains supported right now!
  * others easy to implement
* different target OS supported
  * Linux
  * bare metal
* patch repository for those versions needing patching
  * patches for many versions of the toolchain components
  * support for custom local patch repository
* different threading models (depending on target)
  * NPTL
  * linuxthreads
* support for both soft- and hard-float toolchains
* support for multilib toolchains (experimental for now)
* debug facilities
  * native and cross gdb, gdbserver
  * debugging libraries: duma
  * debugging tools: ltrace, strace
  * restart a build at any step
* sample configurations repository usable as starting point for your own toolchain
  * see the status table for the current list
## Download and usage
You can:
- either get released versions and fixes from http://crosstool-ng.org/download/crosstool-ng/,
- or check-out the [development stuff](#using-the-latest-development-stuff), or browse the code on-line, from the git repos at:
- [https://github.com/crosstool-ng/crosstool-ng](https://github.com/crosstool-ng/crosstool-ng) (main development site)
- crosstool-ng [Browse](http://crosstool-ng.org/git/crosstool-ng/) [GIT](git://crosstool-ng.org/crosstool-ng) [HTTP](http://crosstool-ng.org/git/crosstool-ng) (OSUOSL mirror)
### Using a released version
If you decide to use a released version (replace VERSION with the actual version you choose; the latest version is listed at the top of this page):
```
wget http://crosstool-ng.org/download/crosstool-ng/crosstool-ng-VERSION.tar.bz2
```
Starting with 1.21.0, releases are signed with Bryan Hundven's PGP key.
The fingerprint is:
```
561E D9B6 2095 88ED 23C6 8329 CAD7 C8FC 35B8 71D1
```
The public key can be retrieved from http://pgp.surfnet.nl/ using the short key ID:
```
35B871D1
```
To validate the release tarball, you need to import the key from the keyserver and download the signature of the tarball:
```
gpg --recv-keys 35B871D1
wget http://crosstool-ng.org/download/crosstool-ng/crosstool-ng-VERSION.tar.bz2.sig
```
Now, with the tarball and signature in the same directory, you can verify the tarball:
```
gpg --verify crosstool-ng-VERSION.tar.bz2.sig
```
Now you can unpack and install crosstool-NG:
```
tar xjf crosstool-ng-VERSION.tar.bz2
cd crosstool-ng-VERSION
./configure --prefix=/some/place
make
make install
export PATH="${PATH}:/some/place/bin"
```
Then, you are ready to use crosstool-NG.
- create a place to work in, then list the existing samples (pre-configured toolchains that are known to build and work) to see if one can fit your actual needs. Sample names are 4-part tuples, such as arm-unknown-linux-gnueabi. In the following, we'll use that as a sample name; adapt to your needs:
```
mkdir /a/directory/to/build/your/toolchain
cd /a/directory/to/build/your/toolchain
ct-ng help
ct-ng list-samples
ct-ng show-arm-unknown-linux-gnueabi
```
- once you know what sample to use, configure ct-ng to use it:
```
ct-ng arm-unknown-linux-gnueabi
```
- samples are configured to install in `${HOME}/x-tools/arm-unknown-linux-gnueabi` by default. This should be OK for a first time user, so you can now build your toolchain:
```
ct-ng build
```
- finally, you can set access to your toolchain, and call your new cross-compiler with:
```
export PATH="${PATH}:${HOME}/x-tools/arm-unknown-linux-gnueabi/bin"
arm-unknown-linux-gnueabi-gcc
```
Of course, replace arm-unknown-linux-gnueabi with the actual sample name you chose! ;-)
If no sample really fits your needs:
1. choose the one closest to what you want (see above), and start building it (see above, too)
- this ensures it is working for your machine, before trying to do more advanced tests
2. fine-tune the configuration, and re-run the build, with:
```
ct-ng menuconfig
ct-ng build
```
Then, if all goes well, your toolchain will be available and you can set access to it as shown above.
See the contacts section below for how to ask for further help.
Refer to [documentation at crosstool-NG website](http://crosstool-ng.github.io/docs/) for more information on how to configure, install and use crosstool-NG.

---

**Note 1:** If you elect to build a uClibc-based toolchain with crosstool-NG <= 1.21.0, you will have to prepare a config file for uClibc. With crosstool-NG >= 1.22.0, you only need to prepare a config file for uClibc (or uClibc-ng) if you really need a custom configuration.
**Note 2:** If you call `ct-ng --help`, you will get help for `make(1)`. This is because `ct-ng` is in fact a `make(1)` script. There is no clean workaround for this.
## Using the latest development stuff
I usually set up my development environment like this:
```
mkdir $HOME/build
cd $HOME/build
git clone https://github.com/crosstool-ng/crosstool-ng
cd crosstool-ng
./bootstrap
./configure --prefix=$HOME/.local
make
make install
```
Now make sure `$HOME/.local/bin` is in your PATH (Newer Linux distributions [fc23, ubuntu-16.04, debian stretch] should have this in the PATH already):
```
echo -ne '\n\nif [ -d "$HOME/.local/bin" ]; then\n    PATH="$HOME/.local/bin:$PATH"\nfi\n' >> ~/.profile
```
Then source your .profile to add the PATH to your current environment, or logout and log back in:
```
source ~/.profile
```
Now I create a directory to do my toolchain builds in:
```
mkdir $HOME/tc/
cd $HOME/tc/
```
Say we want to build armv6-rpi-linux-gnueabi:
```
mkdir armv6-rpi-linux-gnueabi
cd armv6-rpi-linux-gnueabi
ct-ng armv6-rpi-linux-gnueabi
```
Now build the sample:
```
ct-ng build
```
## Repository layout
|URL | Purpose |
|---|---|
| http://crosstool-ng.org/git | All available development repositories |
| http://crosstool-ng.org/git/crosstool-ng/ | Mirror of the development repository |
| https://github.com/crosstool-ng/crosstool-ng/ | Main development repository |
To clone the main repository:
```
git clone https://github.com/crosstool-ng/crosstool-ng
```
You can also download from our mirror at crosstool-ng.org:
```
git clone git://crosstool-ng.org/crosstool-ng
```
Alternatively, if you are sitting behind a restrictive proxy that does not let the git protocol through, you can clone with:
```
git clone http://crosstool-ng.org/git/crosstool-ng
```
#### Old repositories
These are the old Mercurial repositories. They are now read-only: [http://crosstool-ng.org/hg/](http://crosstool-ng.org/hg/)
@@ -342,7 +120,9 @@ git push origin fix_comment_typo
At this point the PR will be updated to have the latest commit to that branch, and can be subsequently reviewed.
2. Interactively rebase the offending commit(s) to fix the code review. This option is slightly annoying on GitHub, as the comments are stored with the commits, and are hidden when new commits replace the old commits. They used to disappear completely; now GitHub shows a grey 'View outdated' link next to the old commits. I do this when I don't care about the previous comments in the code review and need to do a total rewrite of my work.
This recipe also comes in handy with other issues, like your topic branch not being up-to-date with master:
```
git fetch --all
@@ -372,12 +152,19 @@ You can find the [list of pending patches](http://patchwork.ozlabs.org/project/c
You can find *all* of this and more at [crosstool-ng.org](http://crosstool-ng.org/)
Report issues at [the project site on GitHub](https://github.com/crosstool-ng/crosstool-ng).
We are also available on IRC: irc.freenode.net #crosstool-ng.
We also have a [mailing list](mailto:crossgcc@sourceware.org) for when you can't get ahold of anyone on IRC. Archive and subscription info can be found here: [https://sourceware.org/ml/crossgcc/](https://sourceware.org/ml/crossgcc/)
Aloha! :-)
[travis]: https://travis-ci.org/crosstool-ng/crosstool-ng
[travis-status]: https://travis-ci.org/crosstool-ng/crosstool-ng.svg


@@ -1,71 +0,0 @@
File.........: 0 - Table of content.txt
Copyright....: (C) 2010 Yann E. MORIN <yann.morin.1998@free.fr>
License......: Creative Commons Attribution Share Alike (CC-by-sa), v2.5
Table Of Content /
_________________/
1- Introduction
- History
- Referring to crosstool-NG
2- Installing crosstool-NG
- Install method
- The hacker's way
- Preparing for packaging
- Shell completion
- Contributed code
3- Configuring a toolchain
- Interesting config options
- Re-building an existing toolchain
4- Building the toolchain
- Stopping and restarting a build
- Testing all toolchains at once
- Overriding the number of // jobs
- Note on // jobs
- Tools wrapper
5- Using the toolchain
- The 'populate' script
6- Toolchain types
- Seemingly-native toolchains
7- Contributing
- Sending a bug report
- Sending patches
8- Internals
- Makefile front-end
- Kconfig parser
- Architecture-specific
- Adding a new version of a component
- Build scripts
9 - How is a toolchain constructed?
- I want a cross-compiler! What is this toolchain you're speaking about?
- So, what are those components in a toolchain?
- And now, how are all these components chained together?
- So the list is complete. But why does crosstool-NG have more steps?
A- Credits
B- Known issues
- gcc is not found, although I *do* have gcc installed
- The extract and/or patch steps fail under Cygwin
- uClibc fails to build under Cygwin
- On 64-bit build systems, the glibc build
fails for 64-bit targets, because it can not find libgcc
- libtool.m4: error: problem compiling FC test program
- unable to detect the exception model
- configure: error: forced unwind support is required
- glibc start files and headers fail with: [/usr/include/limits.h] Error 1
C- Misc. tutorials
- Using crosstool-NG on FreeBSD (and other *BSD)
- Using crosstool-NG on MacOS-X
- Using Mercurial to hack crosstool-NG


@@ -1,111 +0,0 @@
File.........: 1 - Introduction.txt
Copyright....: (C) 2010 Yann E. MORIN <yann.morin.1998@free.fr>
License......: Creative Commons Attribution Share Alike (CC-by-sa), v2.5
Introduction /
_____________/
crosstool-NG aims at building toolchains. Toolchains are an essential component
in a software development project. It will compile, assemble and link the code
that is being developed. Some pieces of the toolchain will eventually end up
in the resulting binary/ies: static libraries are but an example.
So, a toolchain is a very sensitive piece of software, as any bug in one of the
components, or a poorly configured component, can lead to execution problems,
ranging from poor performance, to applications ending unexpectedly, to
mis-behaving software (which more often than not is hard to detect), to hardware
damage, or even to human risks (which is more than regrettable).
Toolchains are made of different pieces of software, each being quite complex
and requiring specially crafted options to build and work seamlessly. This
is usually not that easy, even in the not-so-trivial case of native toolchains.
The work reaches a higher degree of complexity when it comes to cross-
compilation, where it can become quite a nightmare...
Some cross-toolchains exist on the internet, and can be used for general
development, but they have a number of limitations:
- they can be general purpose, in that they are configured for the majority:
no optimisation for your specific target,
- they can be prepared for a specific target and thus are not easy to use,
nor optimised for, or even supporting your target,
- they often are using aging components (compiler, C library, etc...) not
supporting special features of your shiny new processor;
On the other hand, these toolchains offer some advantages:
- they are ready to use and quite easy to install and setup,
- they are proven if used by a wide community.
But once you want to get all the juice out of your specific hardware, you will
want to build your own toolchain. This is where crosstool-NG comes into play.
There are also a number of tools that build toolchains for specific needs,
which are not really scalable. Examples are:
- buildroot (buildroot.uclibc.org) whose main purpose is to build root file
systems, hence the name. But once you have your toolchain with buildroot,
part of it is installed in the root-to-be, so if you want to build a whole
new root, you either have to save the existing one as a template and
restore it later, or restart again from scratch. This is not convenient,
- ptxdist (www.pengutronix.de/software/ptxdist), whose purpose is very
similar to buildroot,
- other projects (openembedded.org for example), which are again used to
build root file systems.
crosstool-NG is really targeted at building toolchains, and only toolchains.
It is then up to you to use it the way you want.
History |
--------+
crosstool was first 'conceived' by Dan Kegel, who offered it to the community
as a set of scripts, a repository of patches, and some pre-configured, general
purpose setup files to be used to configure crosstool. This is available at
http://www.kegel.com/crosstool, and the subversion repository is hosted on
google at http://code.google.com/p/crosstool/.
Yann E. MORIN once managed to add support for uClibc-based toolchains, but it
did not make it into mainline, mostly because Yann didn't have time to port the
patch forward to the new versions, due in part to the big effort it was taking.
So Yann decided to clean up crosstool in the state it was, re-order things in
place, and add appropriate support for what he needed, that is uClibc support
and a menu-driven configuration. He named the new implementation crosstool-NG
(standing for crosstool Next Generation, as many other community projects do,
and as a wink at the TV series "Star Trek: The Next Generation" ;-) ), and
made it available to the community, in case it was of interest to anyone.
In late 2014, Yann became very busy with buildroot and other projects, and so
Bryan Hundven opted to become the new maintainer for crosstool-NG.
Referring to crosstool-NG |
--------------------------+
The long name of the project is crosstool-NG:
* no leading uppercase (except as first word in a sentence)
* crosstool and NG separated with a hyphen (dash)
* NG in uppercase
Crosstool-NG can also be referred to by its short name CT-NG:
* all in uppercase
* CT and NG separated with a hyphen (dash)
The long name is preferred over the short name, except in mail subjects, where
the short name is a better fit.
When referring to a specific version of crosstool-NG, append the version number
either as:
* crosstool-NG X.Y.Z
- the long name, a space, and the version string
* crosstool-ng-X.Y.Z
- the long name in lowercase, a hyphen (dash), and the version string
- this is used to name the release tarballs
* crosstool-ng-X.Y.Z+hg_id
- the long name in lowercase, a hyphen, the version string, and the Hg id
(as returned by: ct-ng version)
- this is used to differentiate between releases and snapshots
The frontend to crosstool-NG is the command ct-ng:
* all in lowercase
* ct and ng separated by a hyphen (dash)


@@ -1,99 +0,0 @@
File.........: 2 - Installing crosstool-NG.txt
Copyright....: (C) 2010 Yann E. MORIN <yann.morin.1998@free.fr>
License......: Creative Commons Attribution Share Alike (CC-by-sa), v2.5
Installing crosstool-NG /
________________________/
There are two ways you can use crosstool-NG:
- build and install it, then get rid of the sources like you'd do for most
programs,
- or only build it and run from the source directory.
The former should be used if you got crosstool-NG from a packaged tarball, see
"Install method", below, while the latter is most useful for developers that
use a clone of the repository, and want to submit patches, see "The Hacker's
way", below.
Install method |
---------------+
If you go for the install, then you just follow the classic, yet easy
./configure way:
./configure --prefix=/some/place
make
make install
export PATH="${PATH}:/some/place/bin"
You can then get rid of crosstool-NG source. Next create a directory to serve
as a working place, cd in there and run:
mkdir work-dir
cd work-dir
ct-ng help
See below for complete usage.
The Hacker's way |
-----------------+
If you go the hacker's way, then the usage is a bit different, although very
simple. First, you need to generate the ./configure script from its autoconf
template:
./bootstrap
Then, you run ./configure for local execution of crosstool-NG:
./configure --enable-local
make
Now, *do not* remove crosstool-NG sources. They are needed to run crosstool-NG!
Stay in the directory holding the sources, and run:
./ct-ng help
See below for complete usage.
Now, provided you used a clone of the repository, you can send me your changes.
See the section titled CONTRIBUTING, below, for how to submit changes.
Preparing for packaging |
------------------------+
If you plan on packaging crosstool-NG, you surely don't want to install it
in your root file system. The install procedure of crosstool-NG honors the
DESTDIR variable:
./configure --prefix=/usr
make
make DESTDIR=/packaging/place install
Shell completion |
-----------------+
crosstool-NG comes with a shell script fragment that defines bash-compatible
completion. That shell fragment is currently not installed automatically, but
this is planned.
To install the shell script fragment, you have two options:
- install system-wide, most probably by copying ct-ng.comp into
/etc/bash_completion.d/
- install for a single user, by copying ct-ng.comp into ${HOME}/ and
sourcing this file from your ${HOME}/.bashrc
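For example, the two install options could look like this (a sketch; it
assumes you run the commands from the top of the crosstool-NG source tree,
where ct-ng.comp lives):

cp ct-ng.comp /etc/bash_completion.d/        # system-wide, as root

or, for a single user:

cp ct-ng.comp "${HOME}/"
echo '. "${HOME}/ct-ng.comp"' >> "${HOME}/.bashrc"

Either way, completion becomes available in new bash sessions (or after
sourcing the file in the current one).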
Contributed code |
-----------------+
Some people contributed code that couldn't get merged for various reasons. This
code is available as lzma-compressed patches, in the contrib/ sub-directory.
These patches are to be applied to the source of crosstool-NG, prior to
installing, using something like the following:
lzcat contrib/foobar.patch.lzma |patch -p1
There is no guarantee that a particular contribution applies to the current
version of crosstool-ng, or that it will work at all. Use contributions at
your own risk.


@@ -1,117 +0,0 @@
File.........: 3 - Configuring a toolchain.txt
Copyright....: (C) 2010 Yann E. MORIN <yann.morin.1998@free.fr>
License......: Creative Commons Attribution Share Alike (CC-by-sa), v2.5
Configuring crosstool-NG /
_________________________/
crosstool-NG is configured with a configurator presenting a menu-structured set
of options. These options let you specify the way you want your toolchain
built, where you want it installed, what architecture and specific processor it
will support, the versions of the components you want to use, etc... The
values for those options are then stored in a configuration file.
The configurator works the same way you configure your Linux kernel. It is
assumed you know how to handle this.
To enter the menu, type:
ct-ng menuconfig
Almost every config item has a help entry. Read them carefully.
String and number options can refer to environment variables. In such a case,
you must use the shell syntax: ${VAR}. You shall neither single- nor double-
quote the string/number options.
There are three environment variables that are computed by crosstool-NG, and
that you can use:
CT_TARGET:
It represents the target tuple you are building for. You can use it for
example in the installation/prefix directory, such as:
/opt/x-tools/${CT_TARGET}
CT_TOP_DIR:
The top directory where crosstool-NG is running. You shouldn't need it in
most cases. There is one case where you may need it: if you have local
patches and you store them in your running directory, you can refer to them
by using CT_TOP_DIR, such as:
${CT_TOP_DIR}/patches.myproject
CT_VERSION:
The version of crosstool-NG you are using. Not much use for you, but it's
there if you need it.
Interesting config options |
---------------------------+
CT_LOCAL_TARBALLS_DIR:
If you already have some tarballs in a directory, enter it here. That will
speed up the retrieving phase, where crosstool-NG would otherwise download
those tarballs.
CT_PREFIX_DIR:
This is where the toolchain will be installed (and, for now, where it
will run from). Common use is to add the target tuple in the directory
path, such as (see above):
/opt/x-tools/${CT_TARGET}
CT_TARGET_VENDOR:
An identifier for your toolchain; it will appear in the vendor part of the
target tuple. It shall *not* contain spaces or dashes. Usually, keep it
to a one-word string, or use underscores to separate words if you need.
Avoid dots, commas, and special characters.
CT_TARGET_ALIAS:
An alias for the toolchain. It will be used as a prefix to the toolchain
tools. For example, you will have ${CT_TARGET_ALIAS}-gcc
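As an illustration (hypothetical values, for an ARM/Linux/EABI configuration),
setting:

CT_TARGET_VENDOR="acme"
CT_TARGET_ALIAS="arm-linux"

would give a target tuple such as arm-acme-linux-gnueabi, with the tools also
reachable under the shorter prefix, e.g. arm-linux-gcc.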
Also, if you think you don't see enough versions, you can try to enable one of
those:
CT_OBSOLETE:
Show obsolete versions or tools. Most of the time, you don't want to base
your toolchain on too old a version (of gcc, for example). But at times, it
can come in handy to use such an old version for regression tests. Those old
versions are hidden behind CT_OBSOLETE. Those versions (or features) are so
marked because maintaining support for those in crosstool-NG would be too
costly, time-wise, and time is dear.
CT_EXPERIMENTAL:
Show experimental versions or tools. Again, you might not want to base your
toolchain on too recent tools (eg. gcc) for production. But if you need a
feature present only in a recent version, or a new tool, you can find them
hidden behind CT_EXPERIMENTAL. Those versions (or features) did not (yet)
receive thorough testing in crosstool-NG, and/or are not mature enough to
be blindly trusted.
Re-building an existing toolchain |
----------------------------------+
If you have an existing toolchain, you can re-use the options used to build it
to create a new toolchain. That needs very little effort on your side,
but is quite easy. The options to build a toolchain are saved with the
toolchain, and you can retrieve this configuration by running:
${CT_TARGET}-ct-ng.config
An alternate method is to extract the configuration from a build.log file.
This will be necessary if your toolchain was built with crosstool-NG prior
to 1.4.0, but can be used with build.log files from any version:
ct-ng extractconfig <build.log >.config
Or, if your build.log file is compressed (most probably!):
bzcat build.log.bz2 |ct-ng extractconfig >.config
The above commands will dump the configuration to stdout, so to rebuild a
toolchain with this configuration, just redirect the output to the
.config file:
${CT_TARGET}-ct-ng.config >.config
ct-ng oldconfig
Then, you can review and change the configuration by running:
ct-ng menuconfig


@@ -1,145 +0,0 @@
File.........: 4 - Building the toolchain.txt
Copyright....: (C) 2010 Yann E. MORIN <yann.morin.1998@free.fr>
License......: Creative Commons Attribution Share Alike (CC-by-sa), v2.5
Building the toolchain /
_______________________/
To build the toolchain, simply type:
ct-ng build
This will use the above configuration to retrieve, extract and patch the
components, build, install and eventually test your newly built toolchain.
You are then free to add the toolchain /bin directory in your PATH to use
it at will.
In any case, you can get some terse help. Just type:
ct-ng help
or:
man 1 ct-ng
Stopping and restarting a build |
--------------------------------+
If you want to stop the build after a step you are debugging, you can pass the
variable STOP to make:
ct-ng build STOP=some_step
Conversely, if you want to restart a build at a specific step you are
debugging, you can pass the RESTART variable to make:
ct-ng build RESTART=some_step
Alternatively, you can call make with the name of a step to just do that step:
ct-ng libc_headers
is equivalent to:
ct-ng build RESTART=libc_headers STOP=libc_headers
The shortcuts +step_name and step_name+ allow you to respectively stop or restart
at that step. Thus:
ct-ng +libc_headers and: ct-ng libc_headers+
are equivalent to:
ct-ng build STOP=libc_headers and: ct-ng build RESTART=libc_headers
To obtain the list of acceptable steps, please call:
ct-ng list-steps
Note that in order to restart a build, you'll have to say 'Y' to the config
option CT_DEBUG_CT_SAVE_STEPS, and the previous build must have effectively
gone that far.
Building all toolchains at once |
--------------------------------+
You can build all samples; simply call:
ct-ng build-all
Overriding the number of // jobs |
---------------------------------+
If you want to override the number of jobs to run in // (the -j option to
make), you can either re-enter the menuconfig, or simply add it on the command
line, as such:
ct-ng build.4
which tells crosstool-NG to override the number of // jobs to 4.
You can see the actions that support overriding the number of // jobs in
the help menu. Those are the ones with [.#] after them (eg. build[.#] or
build-all[.#], and so on...).
Note on // jobs |
----------------+
The crosstool-NG script 'ct-ng' is a Makefile-script. It does *not* execute
in parallel (there is not much to gain). When speaking of // jobs, we are
referring to the number of // jobs when making the *components*. That is, we
speak of the number of // jobs used to build gcc, glibc, and so on...
Tools wrapper |
--------------+
Starting with gcc-4.3 come two new dependencies: GMP and MPFR. With gcc-4.4,
come three new ones: PPL, CLooG/ppl and MPC. With gcc-4.5 again comes a new
dependency on libelf. These are libraries that enable advanced features to
gcc. Additionally, some of those libraries can be used by binutils and gdb.
Unfortunately, not all systems on which crosstool-NG runs have all of those
libraries. And for those that do, the versions of those libraries may be
older than the version required by gcc (and binutils and gdb). To date,
Debian stable (aka Lenny) is lagging behind on some, and is missing the
others. With >= gcc-4.8, PPL and CLooG/PPL are dropped: ISL replaces PPL, and
the upstream version of CLooG is used instead of CLooG/PPL, an under-maintained
fork of CLooG that provided PPL backend support.
See: https://gcc.gnu.org/wiki/Graphite-4.8
This is why crosstool-NG builds its own set of libraries as part of the
toolchain.
The companion libraries can be built either as static libraries, or as shared
libraries. The default is to build static libraries, and is the safe way.
If you decide to use static companion libraries, then you can stop reading
this section.
But if you prefer to have shared libraries, then read on...
Building shared companion libraries poses no problem at build time, as
crosstool-NG correctly points gcc (and binutils and gdb) to the correct
place where our own versions of the libraries are installed. But it poses
a problem when gcc et al. are run: the place where the libraries are is most
probably not known to the host dynamic linker. Still worse, if the host system
has its own versions, then ld.so would load the wrong libraries!
So we have to force the dynamic linker to load the correct version. We do this
by using the LD_LIBRARY_PATH variable, that informs the dynamic linker where
to look for shared libraries prior to searching its standard places. But we
can't impose that burden on the whole system (because it'd be a nightmare to
configure, and because two toolchains on the same system may use different
versions of the libraries); so we have to do it on a per-toolchain basis.
So we rename all binaries of the toolchain (by adding a dot '.' as their first
character), and add a small program, the so-called "tools wrapper", that
correctly sets LD_LIBRARY_PATH prior to running the real tool.
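To make this more concrete, the shell flavour of the wrapper conceptually
boils down to something like the following (a simplified sketch, not the exact
script shipped with crosstool-NG; it assumes the companion libraries live in a
lib/ directory next to the toolchain's bin/ directory):

#!/bin/sh
# simplified sketch of a tools wrapper
dir="$(dirname "$0")"
tool="$(basename "$0")"
LD_LIBRARY_PATH="${dir}/../lib:${LD_LIBRARY_PATH}"
export LD_LIBRARY_PATH
exec "${dir}/.${tool}" "$@"

The real (renamed) tool is the one with the leading dot; the wrapper merely
sets LD_LIBRARY_PATH and hands over execution.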
First, the wrapper was written as a POSIX-compliant shell script. That shell
script is very simple, if not trivial, and works great. The only drawback is
that it does not work on host systems that lack a shell, for example the
MingW32 environment. To solve the issue, the wrapper has been re-written in C,
and compiled at build time. This C wrapper is much more complex than the shell
script, and although it seems to be working, it's been only lightly tested.
Some of the expected short-comings with this C wrapper are:
- multi-byte file names may not be handled correctly
- it's really big for what it does
So, the default wrapper installed with your toolchain is the shell script.
If you know that your system is missing a shell, then you shall use the C
wrapper (and report back whether it works, or does not work, for you).
A final word on the subject: do not build shared libraries. Build them
static, and you'll be safe.


@@ -1,231 +0,0 @@
File.........: 5 - Using the toolchain.txt
Copyright....: (C) 2010 Yann E. MORIN <yann.morin.1998@free.fr>
License......: Creative Commons Attribution Share Alike (CC-by-sa), v2.5
Using the toolchain /
____________________/
Using the toolchain is as simple as adding the toolchain's bin directory in
your PATH, such as:
export PATH="${PATH}:/your/toolchain/path/bin"
and then using the '--host' tuple to tell the build systems to use your
toolchain (if the software package uses the autotools system you should
also pass --build, for completeness):
./configure --host=your-host-tuple --build=your-build-tuple
or
make CC=your-host-tuple-gcc
or
make CROSS_COMPILE=your-host-tuple-
and so on...
(Note: in the above example, 'host' refers to the host of your program,
not the host of the toolchain; and 'build' refers to the machine where
you build your program, that is the host of the toolchain.)
Assembling a root filesystem /
_____________________________/
Assembling a root filesystem for a target device requires the successive
building of a set of software packages for the target architecture. Building
a package potentially requires artifacts which were generated as part of an
earlier build. Note that not all artifacts which are installed as part of a
package are desirable on a target's root filesystem (e.g. man/info files,
include files, etc.). Therefore we must distinguish between a 'staging'
directory and a 'rootfs' directory.
A 'staging' directory is a location into which we install all the build
artifacts. We can then point future builds to this location so they can find
the appropriate header and library files. A 'rootfs' directory is a location
into which we place only the files we want to have on our target.
There are four schools of thought here:
1) Install directly into the sysroot of the toolchain.
By default (i.e. if you don't pass any arguments to the tools which
would change this behaviour) the toolchain that is built by
crosstool-NG will only look in its toolchain directories for system
header and library files:
#include "..." search starts here:
#include <...> search starts here:
<ct-ng install path>/lib/gcc/<host tuple>/4.5.2/include
<ct-ng install path>/lib/gcc/<host tuple>/4.5.2/include-fixed
<ct-ng install path>/lib/gcc/<host tuple>/4.5.2/../../../../<host tuple>/include
<ct-ng install path>/<host tuple>/sysroot/usr/include
In other words, the compiler will automagically find headers and
libraries without extra flags if they are installed under the
toolchain's sysroot directory.
However, this is bad because the toolchain gets polluted, and can
not be re-used.
$ ./configure --build=<build tuple> --host=<host tuple> \
--prefix=/usr --enable-foo-bar...
$ make
$ make DESTDIR=/<ct-ng install path>/<host tuple>/sysroot install
2) Copy the toolchain's sysroot to the 'staging' area.
If you start off by copying the toolchain's sysroot directory to your
staging area, you can simply proceed to install all your packages'
artifacts to the same staging area. You then only need to specify a
'--sysroot=<staging area>' option to the compiler of any subsequent
builds and all your required header and library files will be found/used.
This is a viable option, but requires the user to always specify CFLAGS
in order to include --sysroot=<staging area>, or requires the use of a
wrapper to a few select tools (gcc, ld...) to pass this flag.
Instead of polluting the toolchain's sysroot you are copying its contents
to a new location and polluting the contents in that new location. By
specifying the --sysroot option you're effectively abandoning the default
sysroot in favour of your own.
Incidentally this is what buildroot does using a wrapper, when using an
external toolchain.
$ cp -a $(<host tuple>-gcc --your-cflags-except-sysroot -print-sysroot) \
/path/to/staging
$ ./configure --build=<build tuple> --host=<host tuple> \
--prefix=/usr --enable-foo-bar... \
CC="<host tuple>-gcc --syroot=/path/to/staging" \
CXX="<host tuple>-g++ --sysroot=/path/to/staging" \
LD="<host tuple>-ld --sysroot=/path/to/staging" \
AND_SO_ON="tuple-andsoon --sysroot=/path/to/staging"
$ make
$ make DESTDIR=/path/to/staging install
3) Use separate staging and sysroot directories.
In this scenario you use a staging area to install programs, but you do
not pre-fill that staging area with the toolchain's sysroot. In this case
the compiler will find the system includes and libraries in its sysroot
area but you have to pass appropriate CPPFLAGS and LDFLAGS to tell it
where to find your headers and libraries from your staging area (or use
a wrapper).
$ ./configure --build=<build tuple> --host=<host tuple> \
--prefix=/usr --enable-foo-bar... \
CPPFLAGS="-I/path/to/staging/usr/include" \
LDFLAGS="-L/path/to/staging/lib -L/path/to/staging/usr/lib"
$ make
$ make DESTDIR=/path/to/staging install
4) A mix of 2) and 3), using carefully crafted union mounts.
The staging area is a union mount of:
- the sysroot as a read-only branch
- the real staging area as a read-write branch
This also requires passing --sysroot to point to the union mount, but has
other advantages, such as allowing per-package staging, and a few more
obscure pros. It also has its disadvantages, as it potentially requires
non-root users to create union mounts. Additionally, union mounts are not
yet mainstream in the Linux kernel, so it requires patching. There is a
FUSE-based unionfs implementation, but development is almost stalled,
and there are a few gotchas...
$ (good luck!)
It is strongly advised not to use the toolchain sysroot directory as an
install directory (i.e. option 1) for your programs/packages. If you do so,
you will not be able to use your toolchain for another project. It is even
strongly advised that your toolchain is chmod-ed to read-only once
successfully installed, so that you don't go polluting your toolchain with
your programs'/packages' files. This can be achieved by selecting the
"Render the toolchain read-only" from crosstool-NG's "Paths and misc options"
configuration page.
Thus, when you build a program/package, install it in a separate, staging,
directory and let the cross-toolchain continue to use its own, pristine,
sysroot directory.
When you are done building and want to assemble your rootfs you could simply
take the full contents of your staging directory and use the 'populate'
script to add in the necessary files from the sysroot. However, the staging
area you have created will include lots of build artifacts that you won't
necessarily want/need on your target. For example: static libraries, header
files, linking helper files, man/info pages. You'll also need to add various
configuration files, scripts, and directories to the rootfs so it will boot.
Therefore you'll probably end up creating a separate rootfs directory which
you will populate from the staging area plus any necessary extras, and then use
crosstool-NG's populate script to add the necessary sysroot libraries.
The 'populate' script |
----------------------+
When your root directory is ready, it is still missing some important bits: the
toolchain's libraries. To populate your root directory with those libs, just
run:
your-target-tuple-populate -s /your/root -d /your/root-populated
This will copy /your/root into /your/root-populated, and put the needed and only
the needed libraries there. Thus you don't pollute /your/root with any cruft that
would no longer be needed should you have to remove stuff. /your/root always
contains only those things you install in it.
You can then use /your/root-populated to build up your file system image, a
tarball, or to NFS-mount it from your target, or whatever you need.
The populate script accepts the following options:
-s src_dir
Use 'src_dir' as the un-populated root directory.
-d dst_dir
Put the populated root directory in 'dst_dir'.
-l lib1 [...]
Always add specified libraries.
-L file
Always add libraries listed in 'file'.
-f
Remove 'dst_dir' if it previously existed; continue even if any library
specified with -l or -L is missing.
-v
Be verbose, and tell what's going on (you can see exactly where libs are
coming from).
-h
Print the help.
See 'your-target-tuple-populate -h' for more information on the options.
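For instance, a (hypothetical) invocation that is verbose, forces one extra
library, and overwrites a previous output directory could be:

arm-unknown-linux-gnueabi-populate -f -v \
    -s /your/root -d /your/root-populated \
    -l libgcc_s.so.1

(The target tuple and the library name are just examples.)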
Here is how populate works:
1) performs some sanity checks:
- src_dir and dst_dir are specified
- src_dir exists
- unless forced, dst_dir does not exist
- src_dir != dst_dir
2) copy src_dir to dst_dir
3) add forced libraries to dst_dir
- build the list from -l and -L options
- get forced libraries from the sysroot (see below for heuristics)
- abort on the first missing library, unless -f is specified
4) add all missing libraries to dst_dir
- scan dst_dir for every ELF file that is 'executable' or a
'shared object'
- list the "NEEDED Shared library" fields
- check if the library is already in dst_dir/lib or dst_dir/usr/lib
- if not, get the library from the sysroot
- if it's in sysroot/lib, copy it to dst_dir/lib
- if it's in sysroot/usr/lib, copy it to dst_dir/usr/lib
- in both cases, use the SONAME of the library to create the file
in dst_dir
- if it was not found in the sysroot, this is an error.
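If you want to check by hand what step 4 will pull in, the NEEDED entries of
an ELF file can be listed with readelf (assumed to be installed with your
cross binutils), for example:

arm-unknown-linux-gnueabi-readelf -d /your/root/usr/bin/myapp | grep NEEDED

Each library listed there is looked up in dst_dir first, then in the sysroot,
as described above.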


@@ -1,64 +0,0 @@
File.........: 6 - Toolchain types.txt
Copyright....: (C) 2010 Yann E. MORIN <yann.morin.1998@free.fr>
License......: Creative Commons Attribution Share Alike (CC-by-sa), v2.5
Toolchain types /
________________/
There are four kinds of toolchains you could encounter.
First off, you must understand the following: when it comes to compilers there
are up to four machines involved:
1) the machine configuring the toolchain components: the config machine
2) the machine building the toolchain components: the build machine
3) the machine running the toolchain: the host machine
4) the machine the toolchain is generating code for: the target machine
We can most of the time assume that the config machine and the build machine
are the same. Most of the time, this will be true. The only time it isn't
is if you're using distributed compilation (such as distcc). Let's forget
this for the sake of simplicity.
So we're left with three machines:
- build
- host
- target
Any toolchain will involve those three machines. You can be as sure of
this as "2 and 2 are 4". Here is how they come into play:
1) build == host == target
This is a plain native toolchain, targeting the exact same machine as the
one it is built on, and running again on this exact same machine. You have
to build such a toolchain when you want to use an updated component, such
as a newer gcc for example.
crosstool-NG calls it "native".
2) build == host != target
This is a classic cross-toolchain, which is expected to be run on the same
machine it is compiled on, and generate code to run on a second machine,
the target.
crosstool-NG calls it "cross".
3) build != host == target
Such a toolchain is also a native toolchain, as it targets the same machine
as it runs on. But it is built on another machine. You want such a
toolchain when porting to a new architecture, or if the build machine is
much faster than the host machine.
crosstool-NG calls it "cross-native".
4) build != host != target
This one is called a canadian-toolchain (*), and is tricky. The three
machines in play are different. You might want such a toolchain if you
have a fast build machine, but the users will use it on another machine,
and will produce code to run on a third machine.
crosstool-NG calls it "canadian".
crosstool-NG can build all these kinds of toolchains (or is aiming at it,
anyway!)
(*) The term Canadian Cross came about because at the time that these issues
were all being hashed out, Canada had three national political parties.
http://en.wikipedia.org/wiki/Cross_compiler


@@ -1,63 +0,0 @@
File.........: 7 - Contributing to crosstool-NG.txt
Copyright....: (C) 2010 Yann E. MORIN <yann.morin.1998@free.fr>
License......: Creative Commons Attribution Share Alike (CC-by-sa), v2.5
Contributing to crosstool-NG /
_____________________________/
Sending a bug report |
---------------------+
If you need to send a bug report, please send a mail with its subject
prefixed with "[CT_NG]" to the following destinations:
TO: yann.morin.1998 (at) free.fr
CC: crossgcc (at) sourceware.org
Sending patches |
----------------+
If you want to enhance crosstool-NG, there's a to-do list in the TODO file.
When updating a package, please include the category and component at the
start of the description. For example:
cc/gcc: update to the Linaro 2011.09 release
Here is the (mostly-complete) list of categories and components:
Categories | Components
------------+-------------------------------------------------------
arch | alpha, arm, mips, powerpc...
cc | gcc
binutils | binutils, elf2flt, sstrip
libc | uClibc, glibc, newlib, mingw, none
kernel | linux, mingw32, bare-metal
debug | duma, gdb, ltrace, strace
complibs | gmp, mpfr, isl, cloog, mpc, libelf
comptools | make, m4, autoconf, automake, libtool
------------+-------------------------------------------------------
| The following categories have no component-part:
samples | when adding/updating/removing a sample
kconfig | for stuff in the kconfig/ dir
docs | for changes to the documentation
configure | for changes to ./configure and/or Makefile.in
config | for stuff in config/ not covered above
scripts | for stuff in scripts/ not covered above
Patches should come with the appropriate SoB line. A SoB line is typically
something like:
Signed-off-by: John DOE <john.doe@somewhere.net>
The SoB line is clearly described in Documentation/SubmittingPatches, section
12, of your favourite Linux kernel source tree.
You can also add any of the following lines if applicable:
Acked-by:
Tested-by:
Reviewed-by:
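Putting it together, the header of a (fictitious) patch could look like:

cc/gcc: update to the Linaro 2011.09 release

Signed-off-by: John DOE <john.doe@somewhere.net>
Reviewed-by: Jane ROE <jane.roe@somewhere.net>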
For larger or more frequent contributions, mercurial should be used.
There is a nice, complete and step-by-step tutorial in section 'C'.


@@ -1,293 +0,0 @@
File.........: 8 - Internals.txt
Copyright....: (C) 2010 Yann E. MORIN <yann.morin.1998@free.fr>
License......: Creative Commons Attribution Share Alike (CC-by-sa), v2.5
Internals /
__________/
Internally, crosstool-NG is script-based. To ease usage, the frontend is
Makefile-based.
Makefile front-end |
-------------------+
The entry point to crosstool-NG is the Makefile script "ct-ng". Calling this
script with an action will act exactly as if the Makefile was in the current
working directory and make was called with the action as rule. Thus:
ct-ng menuconfig
is equivalent to having the Makefile in CWD, and calling:
make menuconfig
Having ct-ng as it is avoids copying the Makefile everywhere, and acts as a
traditional command.
ct-ng loads sub-Makefiles from the library directory $(CT_LIB_DIR), as set up
at configuration time with ./configure.
ct-ng also searches for config files, sub-tools, samples, scripts and patches in
that library directory.
Because of a stupid make behavior/bug I was unable to track down, implicit make
rules are disabled: installing with --local would trigger those rules, and mconf
was unbuildable.
Kconfig parser |
---------------+
The kconfig language is a hacked version, vampirised from the Linux kernel
(http://www.kernel.org/), and (heavily) adapted to my needs.
The list of the most notable changes (at least the ones I remember) follows:
- the CONFIG_ prefix has been replaced with CT_
- a leading | in prompts is skipped, and subsequent leading spaces are not
trimmed; otherwise leading spaces are silently trimmed
- removed the warning about undefined environment variable
The kconfig parsers (conf and mconf) are not installed pre-built, but as
source files. Thus you can have the directory where crosstool-NG is installed,
exported (via NFS or whatever) and have clients with different architectures
use the same crosstool-NG installation, and most notably, the same set of
patches.
Architecture-specific |
----------------------+
Note: this chapter is not really well written, and might thus be a little bit
complex to understand. To get a better grasp of what an architecture is, the
reader is kindly encouraged to look at the "arch/" sub-directory, and to the
existing architectures to see how things are laid out.
An architecture is defined by:
- a human-readable name, in lower case letters, with numbers as appropriate.
The underscore is allowed; space and special characters are not.
Eg.: arm, x86_64
- a file in "config/arch/", named after the architecture's name, and suffixed
with ".in".
Eg.: config/arch/arm.in
- a file in "scripts/build/arch/", named after the architecture's name, and
suffixed with ".sh".
Eg.: scripts/build/arch/arm.sh
The architecture's ".in" file API:
> the config option "ARCH_%arch%" (where %arch% is to be replaced with the
actual architecture name).
That config option must have *neither* a type, *nor* a prompt! Also, it can
*not* depend on any other config option (EXPERIMENTAL is managed as above).
Eg.:
config ARCH_arm
+ mandatory:
defines a (terse) help entry for this architecture:
Eg.:
config ARCH_arm
help
The ARM architecture.
+ optional:
selects adequate associated config options.
Note: 64-bit architectures *shall* select ARCH_64
Eg.:
config ARCH_arm
select ARCH_SUPPORTS_BOTH_ENDIAN
select ARCH_DEFAULT_LE
help
The ARM architecture.
Eg.:
config ARCH_x86_64
select ARCH_64
help
The x86_64 architecture.
> other target-specific options, at your discretion. Note however that to
avoid name-clashing, such options shall be prefixed with "ARCH_%arch%",
where %arch% is again replaced by the actual architecture name.
(Note: due to historical reasons, and lack of time to clean up the code,
I may have left some config options that do not completely conform to
this, as the architecture name was written all upper case. However, the
prefix is unique among architectures, and does not cause harm).
The architecture's ".sh" file API:
> the function "CT_DoArchTupleValues"
+ parameters: none
+ environment:
- all variables from the ".config" file,
- the two variables "target_endian_eb" and "target_endian_el" which are
the endianness suffixes
+ return value: 0 upon success, !0 upon failure
+ provides:
- mandatory
- the environment variable CT_TARGET_ARCH
- contains:
the architecture part of the target tuple.
Eg.: "armeb" for big endian ARM
"i386" for an i386
+ provides:
- optional
- the environment variable CT_TARGET_SYS
- contains:
the system part of the target tuple.
Eg.: "gnu" for glibc on most architectures
"gnueabi" for glibc on an ARM EABI
- defaults to:
- for glibc-based toolchain: "gnu"
- for uClibc-based toolchain: "uclibc"
+ provides:
- optional
- the environment variables to configure the cross-gcc (defaults)
- CT_ARCH_WITH_ARCH : the gcc ./configure switch to select architecture level ( "--with-arch=${CT_ARCH_ARCH}" )
- CT_ARCH_WITH_ABI : the gcc ./configure switch to select ABI level ( "--with-abi=${CT_ARCH_ABI}" )
- CT_ARCH_WITH_CPU : the gcc ./configure switch to select CPU instruction set ( "--with-cpu=${CT_ARCH_CPU}" )
- CT_ARCH_WITH_TUNE : the gcc ./configure switch to select scheduling ( "--with-tune=${CT_ARCH_TUNE}" )
- CT_ARCH_WITH_FPU : the gcc ./configure switch to select FPU type ( "--with-fpu=${CT_ARCH_FPU}" )
- CT_ARCH_WITH_FLOAT : the gcc ./configure switch to select floating point arithmetics ( "--with-float=soft" or /empty/ )
+ provides:
- optional
- the environment variables to pass to the cross-gcc to build target binaries (defaults)
- CT_ARCH_ARCH_CFLAG : the gcc switch to select architecture level ( "-march=${CT_ARCH_ARCH}" )
- CT_ARCH_ABI_CFLAG : the gcc switch to select ABI level ( "-mabi=${CT_ARCH_ABI}" )
- CT_ARCH_CPU_CFLAG : the gcc switch to select CPU instruction set ( "-mcpu=${CT_ARCH_CPU}" )
- CT_ARCH_TUNE_CFLAG : the gcc switch to select scheduling ( "-mtune=${CT_ARCH_TUNE}" )
- CT_ARCH_FPU_CFLAG : the gcc switch to select FPU type ( "-mfpu=${CT_ARCH_FPU}" )
- CT_ARCH_FLOAT_CFLAG : the gcc switch to choose floating point arithmetics ( "-msoft-float" or /empty/ )
- CT_ARCH_ENDIAN_CFLAG : the gcc switch to choose big or little endian ( "-mbig-endian" or "-mlittle-endian" )
- default to:
see above.
+ provides:
- optional
- the environment variables to configure the core and final compiler, specific to this architecture:
- CT_ARCH_CC_CORE_EXTRA_CONFIG : additional, architecture specific core gcc ./configure flags
- CT_ARCH_CC_EXTRA_CONFIG : additional, architecture specific final gcc ./configure flags
- default to:
- all empty
+ provides:
- optional
- the architecture-specific CFLAGS and LDFLAGS:
- CT_ARCH_TARGET_CFLAGS
- CT_ARCH_TARGET_LDFLAGS
- default to:
- all empty
You can have a look at "config/arch/arm.in" and "scripts/build/arch/arm.sh" for
a quite complete example of what an actual architecture description looks like.
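For orientation, a stripped-down (hypothetical) architecture script that only
exercises the mandatory part of the API could look like:

# scripts/build/arch/foo.sh -- hypothetical, minimal example
CT_DoArchTupleValues() {
    # mandatory: the architecture part of the target tuple
    CT_TARGET_ARCH="foo"
    # optional: system part and an endianness flag, for instance
    CT_TARGET_SYS="gnu"
    CT_ARCH_ENDIAN_CFLAG="-mlittle-endian"
}

The real arm.sh referenced above shows how the optional variables are filled
in for an actual architecture.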
Kernel specific |
----------------+
A kernel is defined by:
- a human-readable name, in lower case letters, with numbers as appropriate.
The underscore is allowed; space and special characters are not (although
they are internally replaced with underscores).
Eg.: linux, bare-metal
- a file in "config/kernel/", named after the kernel name, and suffixed with
".in".
Eg.: config/kernel/linux.in, config/kernel/bare-metal.in
- a file in "scripts/build/kernel/", named after the kernel name, and suffixed
with ".sh".
Eg.: scripts/build/kernel/linux.sh, scripts/build/kernel/bare-metal.sh
The kernel's ".in" file must contain:
> an optional line containing exactly "# EXPERIMENTAL", starting on the
first column, and without any following space or other character.
If this line is present, then this kernel is considered EXPERIMENTAL,
and correct dependency on EXPERIMENTAL will be set.
> the config option "KERNEL_%kernel_name%" (where %kernel_name% is to be
replaced with the actual kernel name, with all special characters and
spaces replaced by underscores).
That config option must have *neither* a type, *nor* a prompt! Also, it can
*not* depend on EXPERIMENTAL.
Eg.: KERNEL_linux, KERNEL_bare_metal
+ mandatory:
defines a (terse) help entry for this kernel.
Eg.:
config KERNEL_bare_metal
help
Build a compiler for use without any kernel.
+ optional:
selects adequate associated config options.
Eg.:
config KERNEL_bare_metal
select BARE_METAL
help
Build a compiler for use without any kernel.
> other kernel specific options, at your discretion. Note however that, to
avoid name-clashing, such options should be prefixed with
"KERNEL_%kernel_name%", where %kernel_name% is again tp be replaced with
the actual kernel name.
(Note: due to historical reasons, and lack of time to clean up the code,
I may have left some config options that do not completely conform to
this, as the kernel name was written all upper case. However, the prefix
is unique among kernels, and does not cause harm).
The kernel's ".sh" file API:
> is a bash script fragment
> defines the function CT_DoKernelTupleValues
+ see the architecture's CT_DoArchTupleValues, except for:
+ set the environment variable CT_TARGET_KERNEL, the kernel part of the
target tuple
+ return value: ignored
> defines the function "do_kernel_get":
+ parameters: none
+ environment:
- all variables from the ".config" file.
+ return value: 0 for success, !0 for failure.
+ behavior: download the kernel's sources, and store the tarball into
"${CT_TARBALLS_DIR}". To this end, a function is available that
abstracts downloading tarballs:
- CT_DoGet <tarball_base_name> <URL1 [URL...]>
Eg.: CT_DoGet linux-2.6.26.5 ftp://ftp.kernel.org/pub/linux/kernel/v2.6
Note: retrieving sources from svn, cvs, git and the like is not supported
by CT_DoGet. For now, you'll have to do this by hand.
> defines the function "do_kernel_extract":
+ parameters: none
+ environment:
- all variables from the ".config" file,
+ return value: 0 for success, !0 for failure.
+ behavior: extract the kernel's tarball into "${CT_SRC_DIR}", and apply
required patches. To this end, a function is available that abstracts
extracting tarballs:
- CT_ExtractAndPatch <tarball_base_name>
Eg.: CT_ExtractAndPatch linux-2.6.26.5
> defines the function "do_kernel_headers":
+ parameters: none
+ environment:
- all variables from the ".config" file,
+ return value: 0 for success, !0 for failure.
+ behavior: install the kernel headers (if any) in "${CT_SYSROOT_DIR}/usr/include"
> defines any kernel-specific helper functions
These functions, if any, must be prefixed with "do_kernel_%CT_KERNEL%_",
where '%CT_KERNEL%' is to be replaced with the actual kernel name, to avoid
any name-clashing.
You can have a look at "config/kernel/linux.in" and "scripts/build/kernel/linux.sh"
as an example of what a complex kernel description looks like.
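Putting the API together, here is a minimal, hedged sketch of a hypothetical
kernel script. The kernel name "mykernel", the CT_KERNEL_MYKERNEL_VERSION
config variable and the download URL are invented for the example; only the
function names and the CT_DoGet/CT_ExtractAndPatch helpers come from the API
described above:

  # scripts/build/kernel/mykernel.sh -- illustrative sketch only
  CT_DoKernelTupleValues() {
      # Kernel part of the target tuple
      CT_TARGET_KERNEL="mykernel"
  }

  do_kernel_get() {
      # Download the tarball into ${CT_TARBALLS_DIR}
      CT_DoGet "mykernel-${CT_KERNEL_MYKERNEL_VERSION}" \
               http://downloads.example.com/mykernel
  }

  do_kernel_extract() {
      # Extract into ${CT_SRC_DIR} and apply any bundled patches
      CT_ExtractAndPatch "mykernel-${CT_KERNEL_MYKERNEL_VERSION}"
  }

  do_kernel_headers() {
      # Install the (hypothetical) exported headers into the sysroot
      mkdir -p "${CT_SYSROOT_DIR}/usr/include"
      cp -r "${CT_SRC_DIR}/mykernel-${CT_KERNEL_MYKERNEL_VERSION}/include/." \
            "${CT_SYSROOT_DIR}/usr/include/"
  }

  # Kernel-specific helpers, if any, would be named do_kernel_mykernel_*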
Adding a new version of a component |
------------------------------------+
When a new version of a component, such as the Linux kernel, gcc or any other,
is released, adding it to crosstool-NG is quite easy. There is a script that
will do all that for you:
scripts/addToolVersion.sh
Run it with no option to get some help.
Build scripts |
--------------+
To Be Written later...
@ -1,253 +0,0 @@
File.........: 9 - Build procedure overview.txt
Copyright....: (C) 2011 Yann E. MORIN <yann.morin.1998@free.fr>
License......: Creative Commons Attribution Share Alike (CC-by-sa), v2.5
How is a toolchain constructed? /
_______________________________/
This is the result of a discussion with Francesco Turco <mail@fturco.org>:
http://sourceware.org/ml/crossgcc/2011-01/msg00060.html
Francesco has a nice tutorial for beginners, along with a sample, step-by-
step procedure to build a toolchain for an ARM target from an x86_64 Debian
host:
http://fturco.org/wiki/doku.php?id=debian:cross-compiler
Thank you Francesco for initiating this!
I want a cross-compiler! What is this toolchain you're speaking about? |
-----------------------------------------------------------------------+
A cross-compiler is in fact a collection of different tools set up to
work tightly together. The tools are arranged so that they are chained,
in a kind of cascade, where the output of one becomes the input of the
next, to ultimately produce the actual binary code that runs on a
machine. So, we call this arrangement a "toolchain". When a toolchain is
meant to generate code for a machine different from the one it runs on,
it is called a cross-toolchain.
So, what are those components in a toolchain? |
----------------------------------------------+
The components that play a role in the toolchain are first and foremost
the compiler itself. The compiler turns source code (in C, C++, whatever)
into assembly code. The compiler of choice is the GNU compiler collection,
well known as 'gcc'.
The assembly code is then turned into object code by the assembler. This is
provided by the binary utilities, such as the GNU 'binutils'.
Once the different object files have been generated, they have to be
aggregated together to form the final executable binary. This is called
linking, and is achieved with the use of a linker. The GNU 'binutils' also
come with a linker.
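As a small illustration, here is what that chain looks like when driven by
hand with a cross-toolchain (the tuple arm-unknown-linux-gnueabi is just an
example):

  # C source -> assembly (compiler)
  arm-unknown-linux-gnueabi-gcc -S hello.c -o hello.s
  # assembly -> object code (assembler, from binutils)
  arm-unknown-linux-gnueabi-as hello.s -o hello.o
  # object code -> executable (linker, from binutils, via the gcc driver)
  arm-unknown-linux-gnueabi-gcc hello.o -o hello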
So far, we get a complete toolchain that is capable of turning source code
into actual executable code. Depending on the Operating System, or the lack
thereof, running on the target, we also need the C library. The C library
provides a standard abstraction layer that performs basic tasks (such as
allocating memory, printing output on a terminal, managing file access...).
There are many C libraries, each targeted at different systems. For the
Linux /desktop/, there is glibc or even uClibc; for embedded Linux, you
have a choice of uClibc; while for systems without an Operating System,
you may use newlib, dietlibc, or even none at all. There are a few other
C libraries, but they are not as widely used, and/or are targeted at very
specific needs (eg. klibc is a very small subset of the C library aimed at
building constrained initial ramdisks).
Under Linux, the C library needs to know the API to the kernel to decide
what features are present, and if needed, what emulation to include for
missing features. That API is provided by the kernel headers. Note: this
is Linux-specific (and potentially a very few others); the C library on
other OSes does not need the kernel headers.
And now, how are all these components chained together? |
--------------------------------------------------------+
So far, all major components have been covered, but yet there is a specific
order they need to be built. Here we see what the dependencies are, starting
with the compiler we want to ultimately use. We call that compiler the
'final compiler'.
- the final compiler needs the C library, to know how to use it,
but:
- building the C library requires a compiler
A needs B which needs A. This is the classic chicken'n'egg problem... This
is solved by building a stripped-down compiler that does not need the C
library, but is capable of building it. We call it a bootstrap, initial, or
core compiler. So here is the new dependency list:
- the final compiler needs the C library, to know how to use it,
- building the C library requires a core compiler
but:
- the core compiler needs the C library headers and start files, to know
how to use the C library
B needs C which needs B. Chicken'n'egg, again. To solve this one, we will
need to build a C library that will only install its headers and start
files. The start files are a very few files that gcc needs to be able to
turn on thread local storage (TLS) on an NPTL system. So now we have:
- the final compiler needs the C library, to know how to use it,
- building the C library requires a core compiler
- the core compiler needs the C library headers and start files, to know
how to use the C library
but:
- building the start files require a compiler
Geez... C needs D which needs C, yet again. So we need to build a yet
simpler compiler that needs neither the headers nor the start files. This
compiler is also a bootstrap, initial or core compiler. In order
to differentiate the two core compilers, let's call that one "core pass 1",
and the former one "core pass 2". The dependency list becomes:
- the final compiler needs the C library, to know how to use it,
- building the C library requires a compiler
- the core pass 2 compiler needs the C library headers and start files,
to know how to use the C library
- building the start files requires a compiler
- we need a core pass 1 compiler
And as we said earlier, the C library also requires the kernel headers.
The kernel headers themselves have no prerequisites, so this is the end
of the chain in this case:
- the final compiler needs the C library, to know how to use it,
- building the C library requires a core compiler
- the core pass 2 compiler needs the C library headers and start files,
to know how to use the C library
- building the start files requires a compiler and the kernel headers
- we need a core pass 1 compiler
We need to add a few new requirements. The moment we compile code for the
target, we need the assembler and the linker. Such code is, of course,
built from the C library, so we need to build the binutils before the C
library start files, and the complete C library itself. Also, some code
in gcc will run on the target as well. Luckily, the binutils have no
prerequisites of their own. So, our dependency chain is as follows:
- the final compiler needs the C library, to know how to use it, and the
binutils
- building the C library requires a core pass 2 compiler and the binutils
- the core pass 2 compiler needs the C library headers and start files,
to know how to use the C library, and the binutils
- building the start files requires a compiler, the kernel headers and the
binutils
- the core pass 1 compiler needs the binutils
This translates into the following order to build the components:
1 binutils
2 core pass 1 compiler
3 kernel headers
4 C library headers and start files
5 core pass 2 compiler
6 complete C library
7 final compiler
Yes! :-) But are we done yet?
In fact, no, there are still missing dependencies. As far as the tools
themselves are concerned, we do not need anything else.
But gcc has a few pre-requisites. It relies on a few external libraries to
perform some non-trivial tasks (such as handling complex numbers in
constants...). There are a few options to build those libraries. First, one
may think to rely on a Linux distribution to provide those libraries. Alas,
they were not widely available until very, very recently. So, if the distro
is not too recent, chances are that we will have to build those libraries
(which we do below). The affected libraries are:
- the GNU Multiple Precision Arithmetic Library, GMP
- the C library for multiple-precision floating-point computations with
correct rounding, MPFR
- the C library for the arithmetic of complex numbers, MPC
The dependencies for those libraries are:
- MPC requires GMP and MPFR
- MPFR requires GMP
- GMP has no pre-requisite
So, the build order becomes:
1 GMP
2 MPFR
3 MPC
4 binutils
5 core pass 1 compiler
6 kernel headers
7 C library headers and start files
8 core pass 2 compiler
9 complete C library
10 final compiler
Yes! Or yet some more?
This is now sufficient to build a functional toolchain. So if you've had
enough for now, you can stop here. Or if you are curious, you can continue
reading.
gcc can also make use of a few other external libraries. These additional,
optional libraries are used to enable advanced features in gcc, such as
loop optimisation (GRAPHITE) and Link Time Optimisation (LTO). If you want
to use these, you'll need three additional libraries:
To enable GRAPHITE:
- the Integer Set Library, ISL
- the Chunky Loop Generator, CLooG
To enable LTO:
- the ELF object file access library, libelf
The dependencies for those libraries are:
- ISL requires GMP
- CLooG requires GMP and ISL
- libelf has no pre-requisites
The list now looks like (optional libs with a *):
1 GMP
2 MPFR
3 MPC
4 ISL *
5 CLooG *
6 libelf *
7 binutils
8 core pass 1 compiler
9 kernel headers
10 C library headers and start files
11 core pass 2 compiler
12 complete C library
13 final compiler
This list is now complete! Wouhou! :-)
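For illustration only, that order could be encoded as a simple ordered list
driving a shell script. The step names below are invented for the sketch and
are not crosstool-NG's actual internal step names:

  # Purely illustrative: run hypothetical do_<step> functions in the order
  # derived above (isl, cloog and libelf are the optional ones).
  steps="gmp mpfr mpc isl cloog libelf binutils cc_core_pass_1 kernel_headers
         libc_start_files cc_core_pass_2 libc cc_final"
  for step in ${steps}; do
      do_${step} || exit 1
  done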
So the list is complete. But why does crosstool-NG have more steps? |
--------------------------------------------------------------------+
The thirteen steps above are the necessary steps, from a theoretical point
of view. In reality, though, there are small differences; there are a few
different reasons for the additional steps in crosstool-NG.
First, the GNU binutils do not support some kinds of output. It is not possible
to generate 'flat' binaries with binutils, so we have to use another component
that adds this support: elf2flt. Another binary utility called sstrip has been
added. It allows for super-stripping the target binaries, although it is not
strictly required.
Second, crosstool-NG can also build some additional debug utilities to run on
the target. This is where we build, for example, the cross-gdb, the gdbserver
and the native gdb (the last two run on the target, the first runs on the
same machine as the toolchain). The others (strace, ltrace and DUMA)
are absolutely not related to the toolchain, but are nice-to-have stuff that
can greatly help when developing, so are included as goodies (and they are
quite easy to build, so it's OK; more complex stuff is not worth the effort
to include in crosstool-NG).
@ -1,90 +0,0 @@
File.........: A - Credits.txt
Copyright....: (C) 2010 Yann E. MORIN <yann.morin.1998@free.fr>
License......: Creative Commons Attribution Share Alike (CC-by-sa), v2.5
Credits /
________/
I would like to thank these fine people for making crosstool-NG possible:
Dan KEGEL, the original author of crosstool: http://www.kegel.com/
Dan was very helpful and willing to help when I built my first toolchains.
I owe him one. Thank you Dan!
Some crosstool-NG scripts have code snippets coming almost as-is from the
original work by Dan.
And in order of appearance on the crossgcc ML:
Allan CLARK for his investigations on building toolchains on MacOS-X.
Allan made extensive tests of the first alpha of crosstool-NG on his
MacOS-X, and unveiled some bash-2.05 weirdness.
Enrico WEIGELT
- some improvements to the build procedure
- cxa_atexit disabling for C libraries not supporting it (old uClibc)
- misc suggestions (restartable build, ...)
- get rid of some bashisms in ./configure
- contributed OpenRISC or32 support
Robert P. J. DAY:
- some small improvements to the configurator, misc prompting glitches
- 'sanitised' patches for binutils-2.17
- patches for glibc-2.5
- misc patches, typos and eye candy
- too many to list any more!
Al Stone:
- initial ia64 support
- some cosmetics
Szilveszter Ordog:
- a uClibc floating point fix
- initial support for ARM EABI
Mark Jonas:
- initiated Super-H port
Michael Abbott:
- make it build with ancient findutils
Willy Tarreau:
- a patch to glibc to build on 'ancient' shells
- reported mis-use of $CT_CC_NATIVE
Matthias Kaehlcke:
- fix building glibc-2.7 (and 2.6.1) with newer kernels
Daniel Dittmann:
- PowerPC support
Ioannis E. Venetis:
- preliminary Alpha support
- intense gcc-4.3 brainstorming
Thomas Jourdan:
- intense gcc-4.3 brainstorming
- eglibc support
Konrad Eisele:
- initial multilib support:
http://sourceware.org/ml/crossgcc/2011-11/msg00040.html
Many others have contributed, either in form of patches, suggestions,
comments, or testing... Thank you to all of you!
Special dedication to the buildroot people for maintaining a set of patches I
happily and shamelessly vampirise from time to time... :-)
20100530: Status of this file
It's been about a year now since we moved the repository to Mercurial.
The repository now has proper authorship for each changeset, and this is
used to build the changelog at each release. This file will probably no
longer be updated, and is here to credit people prior to the Mercurial
migration, or for people discussing ideas or otherwise helping without
code.
If you think you deserve being cited in this file, do yell at me! ;-)
@ -1,254 +0,0 @@
File.........: B - Known issues.txt
Copyright....: (C) 2010 Yann E. MORIN <yann.morin.1998@free.fr>
License......: Creative Commons Attribution Share Alike (CC-by-sa), v2.5
Known issues /
_____________/
This file lists the known issues encountered while developing crosstool-NG,
but that could not be addressed before the release.
The file has one section for each known issue, each section containing five
sub-sections: Symptoms, Explanations, Status, Fix, and Workaround.
Each section is separated from the others with a line of at least 4 dashes.
The following dummy section explains it all.
--------------------------------
Symptoms:
A one- or two-liner of what you would observe.
Usually, the error message you would see in the build logs.
Explanations:
An explanation, as in-depth as possible, of the context, why it
happens, what has been investigated so far, and possible directions
for trying to solve it (eg. URLs, code snippets...).
Status:
Tells about the status of the issue:
UNCONFIRMED : missing information, or unable to reproduce, but there
is consensus that there is an issue somewhere...
CURRENT : the issue is applicable.
DEPRECATED : the issue used to apply in some cases, but has not been
confirmed or reported again lately.
CLOSED : the issue is no longer valid, and a fix has been added,
either as a patch to the component, or as a
workaround in the scripts and/or the configuration.
Fix:
What you have to do to fix it, if at all possible.
The fact that there is a fix, yet this remains a known issue, means that
there was no time to incorporate the fix into crosstool-NG, or that it is
planned for a future release.
Workaround:
What you can do to fix it *temporarily*, if at all possible.
A workaround is not a real fix, as it can break other parts of
crosstool-NG, but at least it keeps you going in your particular case.
So now, on for the real issues...
--------------------------------
Symptoms:
gcc is not found, although I *do* have gcc installed.
Explanations:
This is an issue on at least RHEL systems, where gcc is a symlink to ccache.
Because crosstool-NG creates links to gcc for the build and host environments,
those symlinks in fact point to ccache, which then doesn't know how
to run the compiler.
A possible fix could probably set the environment variable CCACHE_CC to the
actual compiler used.
Status:
CURRENT
Fix:
None known.
Workaround:
Uninstall ccache.
--------------------------------
Symptoms:
The extract and/or patch steps fail under Cygwin.
Explanations:
This is not related to crosstool-NG. Mounts under Cygwin are by default not
case-sensitive. You have to change a registry setting to disable
case-insensitivity. See:
http://cygwin.com/faq.html section 4, question 30.
Status:
DEPRECATED
Fix:
Change the registry value as per the instructions on the Cygwin website.
Workaround:
None.
--------------------------------
Symptoms:
uClibc fails to build under Cygwin.
Explanations:
With uClibc, it is possible to build a cross-ldd. Unfortunately, it is
not (currently) possible to build this cross-ldd under Cygwin.
Status:
DEPRECATED
Fix:
None so far.
Workaround:
Disable the cross-ldd build.
--------------------------------
Symptoms:
On 64-bit build systems, the glibc build fails for
64-bit targets, because it can not find libgcc.
Explanations:
This issue has been observed when the companion libraries are built
statically. For an unknown reason, in this case, the libgcc built by the
core gcc is not located in the same place it is located when building
with shared companion libraries.
Status:
DEPRECATED
Fix:
None so far.
Workaround:
Build shared companion libraries.
--------------------------------
Symptoms:
libtool.m4: error: problem compiling FC test program
Explanations:
The gcc build procedure tries to run a Fortran test to see if it has a
working native fortran compiler installed on the build machine, and it
can't find one. A native Fortran compiler is needed (seems to be needed)
to build the Fortran frontend of the cross-compiler.
Even if you don't want to build the Fortran frontend, gcc tries to see
if it has one, but fails. This is no problem, as the Fortran frontend
will not be built. There is nothing to worry about (unless you do
want to build the Fortran frontend, of course).
Status:
CURRENT
Fix:
None so far. It's a spurious error, so there will probably never be
a fix for this issue.
Workaround:
None needed, it's a spurious error.
--------------------------------
Symptoms:
unable to detect the exception model
Explanations:
On some architectures, proper stack unwinding (C++) requires that
setjmp/longjmp (sjlj) be used, while other architectures do not
need sjlj. On some architectures, gcc is unable to determine whether
sjlj is needed or not.
Status:
CURRENT
Fix:
None so far.
Workaround:
Try setting use of sjlj to either 'Y' or 'N' (instead of the
default 'M') in the menuconfig, option CT_CC_GCC_SJLJ_EXCEPTIONS
labelled "Use sjlj for exceptions".
--------------------------------
Symptoms:
configure: error: forced unwind support is required
Explanations:
The issue seems to be related to building NPTL on old versions
of glibc on some architectures (seen on powerpc, s390, s390x and x86_64).
Status:
CURRENT
Fix:
None so far. It would require some glibc hacking.
Workaround:
Try setting "Force unwind support" in the "C-library" menu.
--------------------------------
Symptoms:
glibc start files and headers fail with: [/usr/include/limits.h] Error 1
Explanations:
Old glibc Makefiles break with make-3.82.
Status:
CURRENT
Fix:
None so far. It would require some glibc hacking.
Workaround:
There are two possible workarounds:
1- ask crosstool-NG to build make-3.81 just for this build session:
Select the following options:
Paths and misc options --->
[*] Try features marked as EXPERIMENTAL
Companion tools --->
[*] Build some companion tools
[*] make
2- manually install make-3.81 to take precedence over the system make.
--------------------------------
Symptoms:
The build fails with "mixed implicit and normal rules. Stop."
Explanations:
Old glibc Makefiles break with make-3.82.
Status:
CURRENT
Fix:
None so far. See above issue.
Workaround:
See above issue.
--------------------------------
Symptoms:
On x86_64 hosts with 32bit userspace the GMP build fails with:
configure: error: Oops, mp_limb_t is 32 bits, but the assembler code
in this configuration expects 64 bits.
You appear to have set $CFLAGS, perhaps you also need to tell GMP the
intended ABI, see "ABI and ISA" in the manual.
Explanations:
"uname -m" detects x86_64 but the build host is really x86.
Status:
CURRENT
Fix:
None so far.
Workaround:
use "setarch i686 ct-ng build"
--------------------------------
@ -1,403 +0,0 @@
File.........: C - Misc. tutorials.txt
Copyright....: (C) 2010 Yann E. MORIN <yann.morin.1998@free.fr>
License......: Creative Commons Attribution Share Alike (CC-by-sa), v2.5
Misc. tutorials /
________________/
Using crosstool-NG on FreeBSD (and other *BSD) |
-----------------------------------------------+
Contributed by: Titus von Boxberg
Prerequisites and instructions for using ct-ng for building a cross toolchain on FreeBSD as host.
0) Tested on FreeBSD 8.0
1) Install (at least) the following ports
archivers/lzma
textproc/gsed
devel/gmake
devel/patch
shells/bash
devel/bison
lang/gawk
devel/automake110
ftp/wget
Of course, you should have /usr/local/bin in your PATH.
2) run ct-ng's configure with the following tool configuration:
./configure --with-sed=/usr/local/bin/gsed --with-make=/usr/local/bin/gmake \
--with-patch=/usr/local/bin/gpatch
[...other configure parameters as you like...]
3) proceed as described in general documentation
but use gmake instead of make
Using crosstool-NG on MacOS-X |
------------------------------+
Contributed by: Titus von Boxberg
Prerequisites and instructions for using crosstool-NG for building a cross
toolchain on MacOS as host.
0) Mac OS Snow Leopard, with Developer Tools 3.2 installed, or
Mac OS Leopard, with Developer Tools & newer gcc (>= 4.3) installed
via macports
1) You have to use a case sensitive file system for ct-ng's build and target
directories. Use a disk or disk image with a case sensitive fs that you
mount somewhere.
2) Install macports (or a similar easy means of installing 3rd party software),
and make sure that macports' bin dir is at the front (!) of your PATH.
From here on, it is assumed to be /opt/local/bin.
3) Install (at least) the following macports
lzmautils
libtool
binutils
gsed
gawk
gcc43 (only necessary for Leopard OSX 10.5)
gcc_select (only necessary for OSX 10.5, or Xcode > 4)
4) Prerequisites
On Leopard, make sure that the macport's gcc is called with the default
commands (gcc, g++,...), via macport's gcc_select
On OSX 10.7 Lion / when using Xcode >= 4 make sure that the default commands
(gcc, g++, etc.) point to gcc-4.2, NOT llvm-gcc-4.2
by using macport's gcc_select feature. With MacPorts >= 1.9.2
the command is: "sudo port select --set gcc gcc42"
This also requires (like written above) that macport's bin dir
comes before standard directories in your PATH environment variable
because the gcc symlink is installed in /opt/local/bin and the default /usr/bin/gcc
is not removed by the gcc select command!
Explanation: llvm-gcc-4.2 (with Xcode 4.1 it is on my machine
"gcc version 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2335.15.00)")
cannot bootstrap gcc. See http://llvm.org/bugs/show_bug.cgi?id=9571
5) run ct-ng's configure with the following tool configuration
(assuming you have installed the tools via macports in /opt/local):
./configure --with-sed=/opt/local/bin/gsed \
--with-libtool=/opt/local/bin/glibtool \
--with-libtoolize=/opt/local/bin/glibtoolize \
--with-objcopy=/opt/local/bin/gobjcopy \
--with-objdump=/opt/local/bin/gobjdump \
--with-readelf=/opt/local/bin/greadelf \
--with-grep=/opt/local/bin/ggrep \
[...other configure parameters as you like...]
6) proceed as described in standard documentation
-----
HINTS:
- Apparently, GNU make's builtin variable .LIBPATTERNS is misconfigured
under MacOS: it does not include lib%.dylib.
This affects the build of (at least) gdb-7.1.
Put 'lib%.a lib%.so lib%.dylib' as .LIBPATTERNS into your environment
before executing ct-ng build (see the sketch after these hints).
See http://www.gnu.org/software/make/manual/html_node/Libraries_002fSearch.html
for an explanation.
- ct-ng menuconfig will not work on Snow Leopard 10.6.3 since libncurses
is broken with this release. MacOS <= 10.6.2 and >= 10.6.4 are ok.
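Regarding the .LIBPATTERNS hint above, a hedged way to put the value into the
environment is via env, since the variable name contains a dot and plain shell
assignment will not accept it (untested; the assumption is that GNU make then
picks it up from the environment, overriding its builtin default):

  env '.LIBPATTERNS=lib%.a lib%.so lib%.dylib' ct-ng build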
Using Mercurial to hack crosstool-NG |
-------------------------------------+
NOTE: this section was applicable while we were using Mercurial (Hg)
as the DVCS. Now that we've switched to git, this section is no longer current.
We keep it as a reference, since it still contains a few useful hints.
Please help rewrite this section. ;-)
Contributed by: Titus von Boxberg
PREREQUISITES:
Configuring Mercurial:
You need mercurial with the following extensions:
- mq : http://mercurial.selenic.com/wiki/MqExtension
- patchbomb : http://mercurial.selenic.com/wiki/PatchbombExtension
Usually, these two extensions are already part of the installation package.
The mq extension maintains a separate queue of your local changes
that you can change at any later time.
With the patchbomb extension you can email those patches directly
from your local repo.
Your configuration file for mercurial, e.g. ~/.hgrc should contain
at least the following sections (but have a look at `man hgrc`):
# ---
[email]
# configure sending patches directly via Mercurial
from = "Your Name" <your@email.address>
# How to send email:
method = smtp
[smtp]
# SMTP configuration (only for method=smtp)
host = localhost
tls = true
username =
password =
[extensions]
# The following lines enable the two extensions:
hgext.mq =
hgext.patchbomb =
# ----
Create your local repository as a clone:
hg clone http://crosstool-ng.org/hg/crosstool-ng crosstool-ng
Setting up the mq extension in your local copy:
cd crosstool-ng
hg qinit
CREATING PATCHES:
Recording your changes in the patch queue maintained by mq:
# First, create a new patch entry in the patch queue:
hg qnew -D -U -e short_patch_name1
<edit patch description as commit message (see below for an example)>
<now edit the ct-ng sources and check them>
# if you execute `hg status` here, your modifications of the working
# copy should show up.
# Now the following command takes your modifications from the working copy
# into the patch entry
hg qrefresh -D [-e]
<reedit patch description [-e] if desired>
# Now your changes are recorded, and `hg status` should show a clean
# working copy
Repeat the above steps for all your modifications.
The command `hg qseries` informs you about the content of your patch queue.
CONTRIBUTING YOUR PATCHES:
Once you are satisfied with your patch series, you can (you should!)
contribute them back to upstream.
This is easily done using the `hg email` command.
`hg email` sends your new changesets to a specified list of recipients,
each patch in its own email, all ordered in the way you entered them (oldest
first). The command line flag --outgoing selects all changesets that are in
your local but not yet in the upstream repository. Here, these are exactly
the ones you entered into your local patch queue in the section above, so
--outgoing is what you want.
Each email gets the subject set to: "[PATCH x of n] <series summary>"
where 'x' is the serial number in the email series, and 'n' is the total number
of patches in the series. The body of the email is the complete patch, plus
a handful of metadata, that helps properly apply the patch, keeping the log
message, attribution and date, tracking file changes (move, delete, modes...)
`hg email` also threads all outgoing patch emails below an introductory
message. You should use the introductory message (command line flag --intro)
to describe the scope and motivation for the whole patch series. The subject
for the introductory message gets set to: "[PATCH 0 of n] <series summary>"
and you get the chance to set the <series summary>.
Here is a sample `hg email` complete command line:
Note: replace " (at) " with "@"
hg email --outgoing --intro \
--to '"Yann E. MORIN" <yann.morin.1998 (at) free.fr>' \
--cc 'crossgcc (at) sourceware.org'
# It then opens an editor and lets you enter the subject
# and the body for the introductory message.
Use `hg email` with the additional command line switch -n to
first have a look at the email(s) without actually sending them.
MAINTAINING YOUR PATCHES:
When the patches are refined by discussing them on the mailing list,
you may want to finalize and resend them.
The mq extension has the idiosyncrasy of imposing a stack onto the queue:
You can always reedit/refresh only the patch on top of stack.
The queue consists of applied and unapplied patches
(if you reached here via the above steps, all of your patches are applied),
where the 'stack' consists of the applied patches, and 'top of stack'
is the latest applied patch.
The following output of `hg qseries` is now used as an example:
0 A short_patch_name1
1 A short_patch_name2
2 A short_patch_name3
3 A short_patch_name4
You are now able to edit patch 'short_patch_name4' (which is top of stack):
<Edit the sources>
# and execute again
hg qrefresh -D [-e]
<and optionally [-e] reedit the commit message>
If you want to edit e.g. patch short_patch_name2, you have to modify
mq's stack so this patch gets top of stack.
For this purpose see `hg help qgoto`, `hg help qpop`, and `hg help qpush`.
hg qgoto short_patch_name2
# The patch queue should now look like
hg qseries
0 A short_patch_name1
1 A short_patch_name2
2 U short_patch_name3
3 U short_patch_name4
# so patch # 1 (short_patch_name2) is top of stack.
<now reedit the sources for short_patch_name2>
# and execute again
hg qrefresh -D [-e]
<and optionally [-e] reedit the commit message>
# the following command reapplies the now unapplied two patches:
hg qpush -a
# you can also use `hg qgoto short_patch_name4` to get there again.
RESENDING YOUR REEDITED PATCHES:
By mailing list policy, please resend your complete patch series.
--> Go back to section "CONTRIBUTING YOUR PATCHES" and resubmit the full set.
SYNCING WITH UPSTREAM AGAIN:
You can sync your repo with upstream at any time by executing
# first unapply all your patches:
hg qpop -a
# next fetch new changesets from upstream
hg pull
# then update your working copy
hg up
# optionally remove already upstream integrated patches (see below)
hg qdelete <short_name_of_already_applied_patch>
# and reapply your patches if any non upstream-integrated left (but see below)
hg qpush -a
Eventually, your patches get included into the upstream repository
which you initially cloned.
In this case, before executing the hg qpush -a from above
you should manually "hg qdelete" the patches that are already integrated upstream.
HOW TO FORMAT COMMIT MESSAGES (aka patch descriptions):
Commit messages should look like (without leading pipes):
|component: short, one-line description
|
|optional longer description
|on multiple lines if needed
|
|Signed-off-by: as documented in section 7 of ct-ng's documentation
Here is an example commit message (see revision 8bb5151c5b01):
kernel/linux: fix type in version strings
I missed refreshing the patch before pushing. :-(
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Using crosstool-NG on Windows |
------------------------------+
Contributed by: Ray Donnelly
Prerequisites and instructions for using crosstool-NG for building a cross
toolchain on Windows (Cygwin) as the build system and, optionally, Windows
(hereafter MinGW-w64) as the host.
0. Use Cygwin64 if you can. DLL base-address problems are lessened that
way and if you bought a 64-bit CPU, you may as well use it.
1. You must enable Case Sensitivity in the Windows Kernel (this is only really
necessary for Linux targets, but at present, crosstool-ng refuses to operate
on case insensitive filesystems). The registry key for this is:
HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\kernel\obcaseinsensitive
Read more at:
https://cygwin.com/cygwin-ug-net/using-specialnames.html
A sample command for setting this value is shown after these numbered steps.
2. Using setup{,-x86_64}.exe, install the default packages and also the
following ones (tested versions in brackets; please test newer versions
and report successes, via pull requests changing this list, and failures to
https://github.com/crosstool-ng/crosstool-ng/issues):
autoconf (13-1), make (4.1-1), gcc-g++ (4.9.3-1), gperf (3.0.4-2),
bison (3.0.4-1), flex (2.5.39-1), texinfo (6.0-1), wget (1.16.3-1),
patch (2.7.4-1), libtool (2.4.6-2), automake (9-1), diffutils (3.3-3),
libncurses-devel (6.0-1.20151017), help2man (1.44.1-1)
mingw64-i686-gcc-g++* (4.9.2-2), mingw64-x86_64-gcc-g++* (4.9.2-2)
Leave "Select required packages (RECOMMENDED)" ticked.
Notes:
2.1 The packages marked with * are only needed if your host is MinGW-w64.
2.2 Unfortunately, wget pulls in an awful lot of dependencies, including
Python 2.7, Ruby, glib and Tcl.
3. Although nativestrict symlinks seem like the best idea, extracting glibc fails
when they are enabled, so just don't set anything here. If your host is MinGW-w64
then these 'Cygwin-special' symlinks won't work, but you can dereference them by
using tar options --dereference and --hard-dereference when making a final tarball.
I plan to investigate and fix or at least work around the extraction problem.
Read more at:
https://cygwin.com/cygwin-ug-net/using-cygwinenv.html
4. collect2.exe will attempt to run ld, which is a shell script that runs either
ld.exe or gold.exe, so you need to make sure that a working shell is in your path.
Eventually I will replace this with a native program for the MinGW-w64 host.
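As a hedged example for step 1, the registry value can be set from an
elevated prompt roughly as follows; a value of 0 is what the Cygwin
documentation describes as enabling case sensitivity, so double-check against
the page linked above before applying it:

  reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\kernel" \
      /v obcaseinsensitive /t REG_DWORD /d 0 /f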
Using crosstool-NG to build Xtensa toolchains |
----------------------------------------------+
Contributed by: Max Filippov
Xtensa cores are highly configurable: the endianness, instruction set and
register set of a core are chosen at processor configuration time. New
registers and instructions may be added by designers, making each core
configuration unique. Toolchain components cannot know about the features of
each individual core and need to be configured to be compatible with a
particular architecture variant. This configuration includes:
- definitions of instruction formats, names and properties for assembler,
disassembler and debugger;
- definitions of register names and properties for assembler, disassembler and
debugger;
- selection of predefined features, such as endianness, presence of certain
processor options or instructions for compiler, debugger C library and OS
kernels;
- macros with instruction sequences for saving and restoring special, user or
coprocessor registers for OS kernels.
This configuration is provided in the form of source files that must replace
the corresponding files in the binutils, gcc, gdb or newlib source trees, or
be added to the OS kernel source tree. This set of files is usually
distributed as an archive known as an Xtensa configuration overlay.
Tensilica provides such an overlay as part of the processor download; however,
it needs to be reformatted to match the specific format required by
crosstool-NG. For a script to convert the overlay file, and additional
information, please see
http://wiki.linux-xtensa.org/index.php/Toolchain_Overlay_File
The current version of crosstool-NG requires that the overlay file name has the
format xtensa_<CORE_NAME>.tar, where CORE_NAME can be any user selected name.
To make crosstool-NG use overlay file located at <PATH>/xtensa_<CORE_NAME>.tar
select XTENSA_CUSTOM, set config parameter CT_ARCH_XTENSA_CUSTOM_NAME to
CORE_NAME and CT_ARCH_XTENSA_CUSTOM_OVERLAY_LOCATION to PATH.
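As a hypothetical example (the core name and path are invented for
illustration): for an overlay stored as /home/user/overlays/xtensa_mycore.tar,
one would select XTENSA_CUSTOM and end up with configuration values along
these lines:

  # Fragment of the resulting configuration; symbol names are the ones
  # quoted in the paragraph above, "mycore" and the path are made up.
  CT_ARCH_XTENSA_CUSTOM_NAME="mycore"
  CT_ARCH_XTENSA_CUSTOM_OVERLAY_LOCATION="/home/user/overlays"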
The fsf target architecture variant is the configuration provided by toolchain
components by default. It is present only for build-testing toolchain
components and is in no way special or universal.

docs/MANUAL_ONLINE Normal file
@ -0,0 +1 @@
http://crosstool-ng.github.io/docs