Allow the user to configure extra flags to pass to the host compiler
at build time. Applies to both C and C++.
Useful on Ubuntu to turn off the stack protector and fortify defaults
so the program stands a better chance of running on other distros.
Signed-off-by: Michael Hope <michael.hope@linaro.org>
[yann.morin.1998@anciens.enib.fr: put the custom flags at the end]
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@anciens.enib.fr>
The GOLD linker is written in C++. Pass CT_CFLAGS_FOR_HOST as
CXXFLAGS to configure so that any host specific flags are passed
through.
It feels a bit funny passing CFLAGS as CXXFLAGS, but the PPL and GCC
target rules already do the same.
Signed-off-by: Michael Hope <michael.hope@linaro.org>
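A minimal sketch of the idea (not the actual binutils build script; the
extra_config array is illustrative):

    # Sketch: feed the host flags to configure as both CFLAGS and
    # CXXFLAGS, so the C++-built gold is compiled with them too.
    CFLAGS="${CT_CFLAGS_FOR_HOST}" CXXFLAGS="${CT_CFLAGS_FOR_HOST}" \
        ./configure --enable-gold "${extra_config[@]}"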
When CT_PARALLEL_JOBS is -1, set the number of parallel jobs to the
number of online CPUs + 1. Update documentation to match.
I find this useful when building in the cloud. You can use the same
.config file and have the build adapt to the number of processors
available. Limited testing shows that NCPUS+1 is faster than NCPUS+0
or NCPUS+2.
Signed-off-by: Michael Hope <michael.hope@linaro.org>
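A sketch of the intended behaviour, assuming the CPU count is read with getconf:

    # Sketch: when the user asked for -1 jobs, use online CPUs + 1.
    if [ "${CT_PARALLEL_JOBS}" -eq -1 ]; then
        CT_PARALLEL_JOBS=$(( $(getconf _NPROCESSORS_ONLN) + 1 ))
    fi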
Allows using either a tarball or a directory as the custom kernel
source location.
Signed-off-by: Vincent BENOIT <sinseman44@gmail.com>
[yann.morin.1998@anciens.enib.fr: fix space damage, detailed commit message]
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@anciens.enib.fr>
Since kernel.org is dead, and there is no announced or known estimated
time of return to normality, it is impossible to download any kernel at
this time.
Add a known-working mirror.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@anciens.enib.fr>
Even if the current process is highly parallel, crosstool-NG spends most
of its time in single-job steps on fast machines (with a 12-CPU system,
I estimate the parallel vs. non-parallel time to be on the order of
1 to 3; that is, crosstool-NG spends two-thirds of its time running
non-parallel jobs).
Some steps of the gcc build can be parallelized, gaining a little bit of
time on the whole compilation.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@anciens.enib.fr>
Virtually all FTP servers available on-line support passive FTP.
At least, this is the case for the servers crosstool-NG needs to
connect to.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@anciens.enib.fr>
Currently, we use either wget or curl, whichever is installed.
If both are installed, both are used, which means it takes a while
to try all the extensions.
Remove use of wget, and use only curl.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@anciens.enib.fr>
The sysroot prefix dir was broken in #4960f5d9f829 due to a mishap
when making the out-of-sysroot lib/ symlink: the './' was mistakenly
changed into a single '.' .
Although Jonathan suggested simply restoring the missing '/' to return to
normal operation, I preferred using an explicit pushd/popd to be extra
sure of the symlink location and target, along with a fix in the sysroot
relative directory calculation.
Reported-by: Jonathan Grundon <JGrundon@xos.com>
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@anciens.enib.fr>
Allow choosing whether to keep the syscalls that are provided with
newlib. This passes either --disable-newlib-supplied-syscalls or
--enable-newlib-supplied-syscalls to the configure script. If one chooses
to disable the built-in syscalls, he/she will have to write his/her own.
This can be useful when porting newlib to a new platform/board.
Signed-off-by: Kévin PETIT <kpet@free.fr>
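For example, wiring the choice to configure roughly amounts to this (sketch;
the config symbol and the extra_config array are illustrative):

    # Sketch: pick the newlib configure switch from the user's choice.
    if [ "${CT_LIBC_NEWLIB_DISABLE_SUPPLIED_SYSCALLS}" = "y" ]; then
        extra_config+=("--disable-newlib-supplied-syscalls")
    else
        extra_config+=("--enable-newlib-supplied-syscalls")
    fi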
Some packages are available as LZMA tarballs. LZMA is a relatively recent
compression algorithm; it's slightly better than bzip2, but offers much
faster decompression. LZMA is now deprecated in favor of XZ, but some
packages switched to LZMA when XZ was not yet available, or still in its
infancy. The latest XZ (which totally obsoletes LZMA) offers a backward-
compatible 'lzma' utility, so we can check for 'lzma' nonetheless.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@anciens.enib.fr>
HOST_OS really is the target OS. Allow setting it for configure
via an environment variable.
libltrace.a should have an index:
Allow ar to be set as an environment variable, and generate
an index in this lib.
Reported-by: "Guylhem Aznar" <crossgcc@guylhem.net>
Signed-off-by: "Titus von Boxberg" <titus@v9g.de>
Because we need our own host tic, we have to build it; and we do build
it statically for now.
But as MacOS/Darwin/Whatever-you-call-it does not support static linking
(what a shame!), it fails.
Anyway, we don't really care about it being shared, in the end.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@anciens.enib.fr>
The custom-tarball symlink was created in CT_SRC_DIR, when it
should be created in CT_TARBALLS_DIR.
Reported-by: Guylhem Aznar <crossgcc@guylhem.net>
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@anciens.enib.fr>
The script that is installed, and whose sole purpose is to dump
the .config that was used to build the toolchain, is pure insanity.
Let's make it much, much simpler...
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@anciens.enib.fr>
./configure does check for the presence of gz and bzip2, so we can
safely use them in the build scripts.
On the other hand, more recent formats (e.g. XZ) are not yet widely
available, and we do not want to, and cannot, force the user to install
them as a prerequisite.
So, build up a list of allowed tarball formats based on the available
decompressors. For now, this is a static list, but the upcoming XZ
support will conditionally add to this list.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@anciens.enib.fr>
Simplify the way the custom tarball is handled:
- fake version="custom"
- at download, simply link the custom tarball to:
"linux-custom.${custom_extension}"
- at extract, the above allows us to simply extract "linux-${LINUX_VERSION}",
  where LINUX_VERSION is set to the fake version="custom"
Not all that convoluted, in fact... :-/
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@anciens.enib.fr>
Although ctor/dtor do not seem strictly required, missing them proves
rather inconvenient, as ld can't link binaries.
Reported-by: John Spencer <maillist-uclibc@barfooze.de> (sh4rm4 on IRC)
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@anciens.enib.fr>
When downloading via svn/cvs/... an attempt to retrieve from the
mirror is made. If the mirror does not have the required tarball,
an error message is printed. This is misleading, as the download
may later succeed via svn/cvs/...
Remove the messages about failed downloads altogether.
At the same time, use "if ... then ... fi" instead of "... && ..."
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@anciens.enib.fr>
This is needed later, when we'll conditionally use both the
upstream and the mirror URLs.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@anciens.enib.fr>
Currently, the mirror can be used either:
- as a fallback in case upstream is unavailable (default behavior)
- as the preferred source for downloads
But the most common use-case seems to be providing a truly-local LAN
mirror to speed up downloads in big corporations, and/or to provide a
'trusted' source for the tarballs.
So, make the following change:
- if a mirror is specified, always try that before trying upstream
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@anciens.enib.fr>
The cvs download helper looks for the local tarballs dir to see if it
can find a pre-downloaded tarball, and if it does not find it, does
the actual fetch to upstream via cvs.
In the process, it does not even try to get a tarball from the local
mirror, which can be useful if the mirror has been pre-populated
manually (or with a previously downloaded tree).
Fake a tarball get with the standard tarball-download helper, but
without specifying any upstream URL, which makes the helper directly
try the LAN mirror.
Of course, if no mirror is specified, no URL will be available, and
the standard cvs retrieval will kick in.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@anciens.enib.fr>
The svn download helper looks for the local tarballs dir to see if it
can find a pre-downloaded tarball, and if it does not find it, does
the actual fetch to upstream via svn.
In the process, it does not even try to get a tarball from the local
mirror, which can be useful if the mirror has been pre-populated
manually (or with a previously downloaded tree).
Fake a tarball get with the standard tarball-download helper, but
without specifying any upstream URL, which makes the helper directly
try the LAN mirror.
Of course, if no mirror is specified, no URL will be available, and
the standard svn retrieval will kick in.
Reported-by: ANDY KENNEDY <ANDY.KENNEDY@adtran.com>
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@anciens.enib.fr>
When retrieving tarballs from upstream, if no URL was given, do not
fail; simply ignore that fact.
This will be used later, when the SVN helper calls the standard
helper to try the LAN mirror before trying svn.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@anciens.enib.fr>
do_libc_locales_extract() and do_libc_locales() in glibc-eglibc.sh-common have
been overridden for both glibc and eglibc, so they can now be removed, which
this patch does.
Signed-off-by: "Benoît THÉBAUDEAU" <benoit.thebaudeau@advansee.com>
This patch adds partial support for glibc locales.
For now, it only generates the appropriate locales when the host and the target
have the same endianness and uint32_t alignment.
Signed-off-by: "Benoît THÉBAUDEAU" <benoit.thebaudeau@advansee.com>
This patch adds a common glibc/eglibc infrastructure to build and install the
libc locales.
Signed-off-by: "Benoît THÉBAUDEAU" <benoit.thebaudeau@advansee.com>
Some of the mingw32 source tarballs have an appended '-src' after the
version.
Since changeset #6e1412ba8da9 (scripts/functions: force extract folder
to archive basename), it means mingw tarballs get extracted in a directory
ending with '-src'.
Fix that.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@anciens.enib.fr>
Now that we always extract the tarballs into a sane location (see changeset
#6e1412ba8da9: scripts/functions: force extract folder to archive basename),
the uClibc snapshot dir now has the date (as version) in it, e.g.:
uClibc-20100710
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@anciens.enib.fr>
Samples should contain kconfig-parsable definitions, not script variables.
.config.2 contains bash arrays, which is definitely not kconfig-safe...
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@anciens.enib.fr>
Some archives like those of the 2011.07 revisions of Linaro GCC contain a folder
name different from the archive basename, which leads to errors
afterwards, e.g. when patching:
gcc-linaro-4.5-2011.07.tar.bz2 extracts to gcc-linaro-4.5-2011.07-0/
This patch changes CT_Extract() to force the extraction of all archives to a
folder named like the archive basename. E.g.:
gcc-linaro-4.5-2011.07.tar.bz2 now extracts to gcc-linaro-4.5-2011.07/
Signed-off-by: "Benoît THÉBAUDEAU" <benoit.thebaudeau@advansee.com>
Currently, no --enable-add-ons option is passed to libc configure when
"$(do_libc_add_ons_list ,)" is empty, which makes configure automatically search
for present add-ons. In that case, all present add-ons are built, although
no add-on was selected by the user in the config. Moreover, this can make
configure fail if some non-standard add-ons like eglibc-localedef are present.
This behavior also leads to an inconsistency from a user point of view between
the following cases:
- LIBC_ADDONS_LIST="", LIBC_GLIBC_USE_PORTS=n and THREADS="none" in the config,
which makes "$(do_libc_add_ons_list ,)" return "", so all present add-ons
are built.
- LIBC_ADDONS_LIST="", LIBC_GLIBC_USE_PORTS=n and THREADS!="none" in the
config, which makes "$(do_libc_add_ons_list ,)" return the add-on supporting
the chosen threading implementation, e.g. "nptl", so only this add-on is
built.
This patch disables the building of all add-ons in that case.
It is still possible to build all present add-ons by adding --enable-add-ons to
LIBC_GLIBC_EXTRA_CONFIG_ARRAY.
Signed-off-by: "Benoît THÉBAUDEAU" <benoit.thebaudeau@advansee.com>
gdb needs to know where to find the libstdc++ helper python script
to do, well, whatever it has to do with it...
We can't install that in the user's ~/.gdbinit, it's too complex to
handle all the cases. Moreover, if the user is using more than one
toolchain, we can't put all that stuff in there...
Just provide a sample config file the user can adapt to his/her
own needs.
Thanks go to Khem RAJ for providing such a hint:
http://sourceware.org/ml/crossgcc/2011-07/msg00026.html
Reported-by: ANDY KENNEDY <ANDY.KENNEDY@adtran.com>
CC: Khem Raj <raj.khem@gmail.com>
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@anciens.enib.fr>
The place to get 3.x has changed; the version scheme has changed.
No need to be overkill, just support 3.x; 4.x is not even dreamt of.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@anciens.enib.fr>