usermode/library/malloc-original/src/dlmalloc.c

00001 /*
00002   This is a version (aka dlmalloc) of malloc/free/realloc written by
00003   Doug Lea and released to the public domain.  Use, modify, and
00004   redistribute this code without permission or acknowledgement in any
00005   way you wish.  Send questions, comments, complaints, performance
00006   data, etc to dl@cs.oswego.edu
00007 
00008 * VERSION 2.7.0 Sun Mar 11 14:14:06 2001  Doug Lea  (dl at gee)
00009 
00010    Note: There may be an updated version of this malloc obtainable at
00011            ftp://gee.cs.oswego.edu/pub/misc/malloc.c
00012          Check before installing!
00013 
00014 * Quickstart
00015 
00016   This library is all in one file to simplify the most common usage:
00017   ftp it, compile it (-O), and link it into another program. All
00018   of the compile-time options default to reasonable values for use on
00019   most unix platforms. Compile -DWIN32 for reasonable defaults on windows.
00020   You might later want to step through various compile-time and dynamic
00021   tuning options.
00022 
00023   For convenience, an include file for code using this malloc is at:
00024      ftp://gee.cs.oswego.edu/pub/misc/malloc-2.7.0.h
00025   You don't really need this .h file unless you call functions not
00026   defined in your system include files.  The .h file contains only the
00027   excerpts from this file needed for using this malloc on ANSI C/C++
00028   systems, so long as you haven't changed compile-time options about
00029   naming and tuning parameters.  If you do, then you can create your
00030   own malloc.h that does include all settings by cutting at the point
00031   indicated below.
00032 
00033 * Why use this malloc?
00034 
00035   This is not the fastest, most space-conserving, most portable, or
00036   most tunable malloc ever written. However it is among the fastest
00037   while also being among the most space-conserving, portable and tunable.
00038   Consistent balance across these factors results in a good general-purpose
00039   allocator for malloc-intensive programs.
00040 
00041   The main properties of the algorithms are:
00042   * For large (>= 512 bytes) requests, it is a pure best-fit allocator,
00043     with ties normally decided via FIFO (i.e. least recently used).
00044   * For small (<= 64 bytes by default) requests, it is a caching
00045     allocator, that maintains pools of quickly recycled chunks.
00046   * In between, and for combinations of large and small requests, it does
00047     the best it can trying to meet both goals at once.
00048   * For very large requests (>= 128KB by default), it relies on system
00049     memory mapping facilities, if supported.
00050 
00051   For a longer but slightly out of date high-level description, see
00052      http://gee.cs.oswego.edu/dl/html/malloc.html
00053 
00054   You may already, by default, be using a C library containing a
00055   malloc that is based on some version of this malloc (for example on
00056   Linux). You might still want to use the one in this file in order to
00057   customize settings or to avoid the overheads associated with library
00058   versions.
00059 
00060 * Contents, described in more detail in "description of public routines" below.
00061 
00062   Standard (ANSI/SVID/...)  functions:
00063     malloc(size_t n);
00064     calloc(size_t n_elements, size_t element_size);
00065     free(Void_t* p);
00066     realloc(Void_t* p, size_t n);
00067     memalign(size_t alignment, size_t n);
00068     valloc(size_t n);
00069     mallinfo()
00070     mallopt(int parameter_number, int parameter_value)
00071 
00072   Additional functions:
00073     independent_calloc(size_t n_elements, size_t size, Void_t* chunks[]);
00074     independent_comalloc(size_t n_elements, size_t sizes[], Void_t* chunks[]);
00075     pvalloc(size_t n);
00076     cfree(Void_t* p);
00077     malloc_trim(size_t pad);
00078     malloc_usable_size(Void_t* p);
00079     malloc_stats();
00080 
00081 * Vital statistics:
00082 
00083   Supported pointer representation:       4 or 8 bytes
00084   Supported size_t  representation:       4 or 8 bytes 
00085        Note that size_t is allowed to be 4 bytes even if pointers are 8.
00086        You can adjust this by defining INTERNAL_SIZE_T
00087 
00088   Alignment:                              2 * sizeof(size_t) (default)
00089        (i.e., 8 byte alignment with 4byte size_t). This suffices for
00090        nearly all current machines and C compilers. However, you can
00091        define MALLOC_ALIGNMENT to be wider than this if necessary.
00092 
00093   Minimum overhead per allocated chunk:   4 or 8 bytes
00094        Each malloced chunk has a hidden word of overhead holding size
00095        and status information.
00096 
00097   Minimum allocated size: 4-byte ptrs:  16 bytes    (including 4 overhead)
00098                           8-byte ptrs:  24/32 bytes (including 4/8 overhead)
00099 
00100        When a chunk is freed, 12 (for 4byte ptrs) or 20 (for 8 byte
00101        ptrs but 4 byte size) or 24 (for 8/8) additional bytes are
00102        needed; 4 (8) for a trailing size field and 8 (16) bytes for
00103        free list pointers. Thus, the minimum allocatable size is
00104        16/24/32 bytes.
00105 
00106        Even a request for zero bytes (i.e., malloc(0)) returns a
00107        pointer to something of the minimum allocatable size.
00108 
00109        The maximum overhead wastage (i.e., the number of extra bytes
00110        allocated beyond those requested in malloc) is less than or equal
00111        to the minimum size, except for requests >= mmap_threshold that
00112        are serviced via mmap(), where the worst case wastage is 2 *
00113        sizeof(size_t) bytes plus the remainder from a system page (the
00114        minimal mmap unit); typically 4096 or 8192 bytes.
00115 
00116   Maximum allocated size:  4-byte size_t: 2^32 minus about two pages 
00117                            8-byte size_t: 2^64 minus about two pages
00118 
00119        It is assumed that (possibly signed) size_t values suffice to
00120        represent chunk sizes. `Possibly signed' is due to the fact
00121        that `size_t' may be defined on a system as either a signed or
00122        an unsigned type. The ISO C standard says that it must be
00123        unsigned, but a few systems are known not to adhere to this.
00124        Additionally, even when size_t is unsigned, sbrk (which is by
00125        default used to obtain memory from system) accepts signed
00126        arguments, and may not be able to handle size_t-wide arguments
00127        with negative sign bit.  Generally, values that would
00128        appear as negative after accounting for overhead and alignment
00129        are supported only via mmap(), which does not have this
00130        limitation.
00131 
00132        Requests for sizes outside the allowed range will perform an optional
00133        failure action and then return null. (Requests may also
00134        fail because a system is out of memory.)
00135 
00136   Thread-safety: NOT thread-safe unless USE_MALLOC_LOCK defined
00137 
00138        When USE_MALLOC_LOCK is defined, wrappers are created to
00139        surround every public call with either a pthread mutex or
00140        a win32 spinlock (depending on WIN32). This is not
00141        especially fast, and can be a major bottleneck.
00142        It is designed only to provide minimal protection
00143        in concurrent environments, and to provide a basis for
00144        extensions.  If you are using malloc in a concurrent program,
00145        you would be far better off obtaining ptmalloc, which is
00146        derived from a version of this malloc, and is well-tuned for
00147        concurrent programs. (See http://www.malloc.de)
00148 
00149   Compliance: I believe it is compliant with the 1997 Single Unix Specification
00150        (See http://www.opennc.org). Also SVID/XPG, ANSI C, and probably 
00151        others as well.
00152 
00153 * Synopsis of compile-time options:
00154 
00155     People have reported using previous versions of this malloc on all
00156     versions of Unix, sometimes by tweaking some of the defines
00157     below. It has been tested most extensively on Solaris and
00158     Linux. It is also reported to work on WIN32 platforms.
00159     People also report using it in stand-alone embedded systems.
00160 
00161     The implementation is in straight, hand-tuned ANSI C.  It is not
00162     at all modular. (Sorry!)  It uses a lot of macros.  To be at all
00163     usable, this code should be compiled using an optimizing compiler
00164     (for example gcc -O3) that can simplify expressions and control
00165     paths. (FAQ: some macros import variables as arguments rather than
00166     declare locals because people reported that some debuggers
00167     otherwise get confused.)
00168 
00169     OPTION                     DEFAULT VALUE
00170 
00171     Compilation Environment options:
00172 
00173     __STD_C                    derived from C compiler defines
00174     WIN32                      NOT defined
00175     HAVE_MEMCPY                defined
00176     USE_MEMCPY                 1 if HAVE_MEMCPY is defined
00177     HAVE_MMAP                  defined as 1 
00178     MMAP_CLEARS                1
00179     HAVE_MREMAP                0 unless linux defined
00180     malloc_getpagesize         derived from system #includes, or 4096 if not
00181     HAVE_USR_INCLUDE_MALLOC_H  NOT defined
00182     LACKS_UNISTD_H             NOT defined unless WIN32
00183     LACKS_SYS_PARAM_H          NOT defined unless WIN32
00184     LACKS_SYS_MMAN_H           NOT defined unless WIN32
00185 
00186     Changing default word sizes:
00187 
00188     INTERNAL_SIZE_T            size_t
00189     MALLOC_ALIGNMENT           2 * sizeof(INTERNAL_SIZE_T)
00190 
00191     Configuration and functionality options:
00192 
00193     USE_DL_PREFIX              NOT defined
00194     USE_PUBLIC_MALLOC_WRAPPERS NOT defined
00195     USE_MALLOC_LOCK            NOT defined
00196     DEBUG                      NOT defined
00197     REALLOC_ZERO_BYTES_FREES   NOT defined
00198     MALLOC_FAILURE_ACTION      errno = ENOMEM, if __STD_C defined, else no-op
00199     TRIM_FASTBINS              0
00200 
00201     Options for customizing MORECORE:
00202 
00203     MORECORE                   sbrk
00204     MORECORE_CONTIGUOUS        1 
00205     MORECORE_CANNOT_TRIM       NOT defined
00206     MMAP_AS_MORECORE_SIZE      (1024 * 1024) 
00207 
00208     Tuning options that are also dynamically changeable via mallopt:
00209 
00210     DEFAULT_MXFAST             64
00211     DEFAULT_TRIM_THRESHOLD     128 * 1024
00212     DEFAULT_TOP_PAD            0
00213     DEFAULT_MMAP_THRESHOLD     128 * 1024
00214     DEFAULT_MMAP_MAX           65536
00215 
00216     There are several other #defined constants and macros that you
00217     probably don't want to touch unless you are extending or adapting malloc.
00218 */
00219 
00220 /*
00221   WIN32 sets up defaults for MS environment and compilers.
00222   Otherwise defaults are for unix.
00223 */
00224 
00225 #ifdef WIN32
00226 
00227 #define WIN32_LEAN_AND_MEAN
00228 #include <windows.h>
00229 
00230 /* Win32 doesn't supply or need the following headers */
00231 #define LACKS_UNISTD_H
00232 #define LACKS_SYS_PARAM_H
00233 #define LACKS_SYS_MMAN_H
00234 
00235 /* Use the supplied emulation of sbrk */
00236 #define MORECORE sbrk
00237 #define MORECORE_CONTIGUOUS 1
00238 #define MORECORE_FAILURE    ((void*)(-1))
00239 
00240 /* Use the supplied emulation of mmap and munmap */
00241 #define HAVE_MMAP 1
00242 #define MUNMAP_FAILURE  (-1)
00243 #define MMAP_CLEARS 1
00244 
00245 /* These values don't really matter in windows mmap emulation */
00246 #define MAP_PRIVATE 1
00247 #define MAP_ANONYMOUS 2
00248 #define PROT_READ 1
00249 #define PROT_WRITE 2
00250 
00251 /* Emulation functions defined at the end of this file */
00252 
00253 #define USE_MALLOC_LOCK
00254 #define NEEDED
00255 
00256 /* If USE_MALLOC_LOCK, use supplied critical-section-based lock functions */
00257 #ifdef USE_MALLOC_LOCK
00258 static int slwait(int *sl);
00259 static int slrelease(int *sl);
00260 #endif
00261 
00262 static long getpagesize(void);
00263 static long getregionsize(void);
00264 /* static */ void *sbrk(long size);
00265 static void *mmap(void *ptr, long size, long prot, long type, long handle, long arg);
00266 static long munmap(void *ptr, long size);
00267 
00268 static void vminfo (unsigned long *free, unsigned long *reserved, unsigned long *committed);
00269 static int cpuinfo (int whole, unsigned long *kernel, unsigned long *user);
00270 
00271 #endif
00272 
00273 /*
00274   __STD_C should be nonzero if using ANSI-standard C compiler, a C++
00275   compiler, or a C compiler sufficiently close to ANSI to get away
00276   with it.
00277 */
00278 
00279 #ifndef __STD_C
00280 #if defined(__STDC__) || defined(__cplusplus)
00281 #define __STD_C     1
00282 #else
00283 #define __STD_C     0
00284 #endif 
00285 #endif /*__STD_C*/
00286 
00287 
00288 /*
00289   Void_t* is the pointer type that malloc should say it returns
00290 */
00291 
00292 #ifndef Void_t
00293 #if (__STD_C || defined(WIN32))
00294 #define Void_t      void
00295 #else
00296 #define Void_t      char
00297 #endif
00298 #endif /*Void_t*/
00299 
00300 #if __STD_C
00301 #include <stddef.h>   /* for size_t */
00302 #else
00303 #include <sys/types.h>
00304 #endif
00305 
00306 #ifdef __cplusplus
00307 extern "C" {
00308 #endif
00309 
00310 /* define LACKS_UNISTD_H if your system does not have a <unistd.h>. */
00311 
00312 /* #define  LACKS_UNISTD_H */
00313 
00314 #ifndef LACKS_UNISTD_H
00315 #include <unistd.h>
00316 #endif
00317 
00318 /* define LACKS_SYS_PARAM_H if your system does not have a <sys/param.h>. */
00319 
00320 /* #define  LACKS_SYS_PARAM_H */
00321 
00322 
00323 #include <stdio.h>    /* needed for malloc_stats */
00324 #include <errno.h>    /* needed for optional MALLOC_FAILURE_ACTION */
00325 
00326 
00327 /*
00328   Debugging:
00329 
00330   Because freed chunks may be overwritten with bookkeeping fields, this
00331   malloc will often die when freed memory is overwritten by user
00332   programs.  This can be very effective (albeit in an annoying way)
00333   in helping track down dangling pointers.
00334 
00335   If you compile with -DDEBUG, a number of assertion checks are
00336   enabled that will catch more memory errors. You probably won't be
00337   able to make much sense of the actual assertion errors, but they
00338   should help you locate incorrectly overwritten memory.  The
00339   checking is fairly extensive, and will slow down execution
00340   noticeably. Calling malloc_stats or mallinfo with DEBUG set will
00341   attempt to check every non-mmapped allocated and free chunk in the
00342   course of computing the summaries. (By nature, mmapped regions
00343   cannot be checked very much automatically.)
00344 
00345   Setting DEBUG may also be helpful if you are trying to modify
00346   this code. The assertions in the check routines spell out in more
00347   detail the assumptions and invariants underlying the algorithms.
00348 
00349   Setting DEBUG does NOT provide an automated mechanism for checking
00350   that all accesses to malloced memory stay within their
00351   bounds. However, there are several add-ons and adaptations of this
00352   or other mallocs available that do this.
00353 */
00354 
00355 #if DEBUG
00356 #include <assert.h>
00357 #else
00358 #define assert(x) ((void)0)
00359 #endif
00360 
00361 
00362 /*
00363   INTERNAL_SIZE_T is the word-size used for internal bookkeeping
00364   of chunk sizes.
00365 
00366   The default version is the same as size_t.
00367 
00368   While not strictly necessary, it is best to define this as an
00369   unsigned type, even if size_t is a signed type. This may avoid some
00370   artificial size limitations on some systems.
00371 
00372   On a 64-bit machine, you may be able to reduce malloc overhead by
00373   defining INTERNAL_SIZE_T to be a 32 bit `unsigned int' at the
00374   expense of not being able to handle more than 2^32 of malloced
00375   space. If this limitation is acceptable, you are encouraged to set
00376   this unless you are on a platform requiring 16byte alignments. In
00377   this case the alignment requirements turn out to negate any
00378   potential advantages of decreasing size_t word size.
00379 
00380   Implementors: Beware of the possible combinations of:
00381      - INTERNAL_SIZE_T might be signed or unsigned, might be 32 or 64 bits,
00382        and might be the same width as int or as long
00383      - size_t might have different width and signedness from INTERNAL_SIZE_T
00384      - int and long might be 32 or 64 bits, and might be the same width
00385   To deal with this, most comparisons and difference computations
00386   among INTERNAL_SIZE_Ts should cast them to unsigned long, being
00387   aware of the fact that casting an unsigned int to a wider long does
00388   not sign-extend. (This also makes checking for negative numbers
00389   awkward.) Some of these casts result in harmless compiler warnings
00390   on some systems.
00391 */
00392 
00393 #ifndef INTERNAL_SIZE_T
00394 #define INTERNAL_SIZE_T size_t
00395 #endif
00396 
00397 /* The corresponding word size */
00398 #define SIZE_SZ                (sizeof(INTERNAL_SIZE_T))
00399 
00400 
00401 /*
00402   MALLOC_ALIGNMENT is the minimum alignment for malloc'ed chunks.
00403   It must be a power of two at least 2 * SIZE_SZ, even on machines
00404   for which smaller alignments would suffice. It may be defined as
00405   larger than this though. Note however that code and data structures
00406   are optimized for the case of 8-byte alignment.
00407 */
00408 
00409 
00410 #ifndef MALLOC_ALIGNMENT
00411 #define MALLOC_ALIGNMENT       (2 * SIZE_SZ)
00412 #endif
00413 
00414 /* The corresponding bit mask value */
00415 #define MALLOC_ALIGN_MASK      (MALLOC_ALIGNMENT - 1)
00416 
00417 
00418 
00419 /*
00420   REALLOC_ZERO_BYTES_FREES should be set if a call to
00421   realloc with zero bytes should be the same as a call to free.
00422   Some people think it should. Otherwise, since this malloc
00423   returns a unique pointer for malloc(0), realloc(p, 0) does too.
00424 */
00425 
00426 /*   #define REALLOC_ZERO_BYTES_FREES */
00427 
00428 /*
00429   TRIM_FASTBINS controls whether free() of a very small chunk can
00430   immediately lead to trimming. Setting to true (1) can reduce memory
00431   footprint, but will almost always slow down programs that use a lot
00432   of small chunks.
00433 
00434   Define this only if you are willing to give up some speed to more
00435   aggressively reduce system-level memory footprint when releasing
00436   memory in programs that use many small chunks.  You can get
00437   essentially the same effect by setting MXFAST to 0, but this can
00438   lead to even greater slowdowns in programs using many small chunks.
00439   TRIM_FASTBINS is an in-between compile-time option that disables
00440   only those chunks bordering topmost memory from being placed in
00441   fastbins.
00442 */
00443 
00444 #ifndef TRIM_FASTBINS
00445 #define TRIM_FASTBINS  0
00446 #endif
00447 
00448 
00449 /*
00450   USE_DL_PREFIX will prefix all public routines with the string 'dl'.
00451   This is necessary when you only want to use this malloc in one part 
00452   of a program, using your regular system malloc elsewhere.
00453 */
00454 
00455 #define USE_DL_PREFIX
00456 
00457 
00458 /*
00459   USE_MALLOC_LOCK causes wrapper functions to surround each
00460   callable routine with pthread mutex lock/unlock.
00461 
00462   USE_MALLOC_LOCK forces USE_PUBLIC_MALLOC_WRAPPERS to be defined
00463 */
00464 
00465 
00466 /* #define USE_MALLOC_LOCK */
00467 
00468 
00469 /*
00470   If USE_PUBLIC_MALLOC_WRAPPERS is defined, every public routine is
00471   actually a wrapper function that first calls MALLOC_PREACTION, then
00472   calls the internal routine, and follows it with
00473   MALLOC_POSTACTION. This is needed for locking, but you can also use
00474   this, without USE_MALLOC_LOCK, for purposes of interception,
00475   instrumentation, etc. It is a sad fact that using wrappers often
00476   noticeably degrades performance of malloc-intensive programs.
00477 */
00478 
00479 #ifdef USE_MALLOC_LOCK
00480 #define USE_PUBLIC_MALLOC_WRAPPERS
00481 #else
00482 /* #define USE_PUBLIC_MALLOC_WRAPPERS */
00483 #endif
00484 
00485 
00486 /* 
00487    Two-phase name translation.
00488    All of the actual routines are given mangled names.
00489    When wrappers are used, they become the public callable versions.
00490    When DL_PREFIX is used, the callable names are prefixed.
00491 */
00492 
00493 #ifndef USE_PUBLIC_MALLOC_WRAPPERS
00494 #define cALLOc      public_cALLOc
00495 #define fREe        public_fREe
00496 #define cFREe       public_cFREe
00497 #define mALLOc      public_mALLOc
00498 #define mEMALIGn    public_mEMALIGn
00499 #define rEALLOc     public_rEALLOc
00500 #define vALLOc      public_vALLOc
00501 #define pVALLOc     public_pVALLOc
00502 #define mALLINFo    public_mALLINFo
00503 #define mALLOPt     public_mALLOPt
00504 #define mTRIm       public_mTRIm
00505 #define mSTATs      public_mSTATs
00506 #define mUSABLe     public_mUSABLe
00507 #define iCALLOc     public_iCALLOc
00508 #define iCOMALLOc   public_iCOMALLOc
00509 #endif
00510 
00511 #ifdef USE_DL_PREFIX
00512 #define public_cALLOc    dlcalloc
00513 #define public_fREe      dlfree
00514 #define public_cFREe     dlcfree
00515 #define public_mALLOc    dlmalloc
00516 #define public_mEMALIGn  dlmemalign
00517 #define public_rEALLOc   dlrealloc
00518 #define public_vALLOc    dlvalloc
00519 #define public_pVALLOc   dlpvalloc
00520 #define public_mALLINFo  dlmallinfo
00521 #define public_mALLOPt   dlmallopt
00522 #define public_mTRIm     dlmalloc_trim
00523 #define public_mSTATs    dlmalloc_stats
00524 #define public_mUSABLe   dlmalloc_usable_size
00525 #define public_iCALLOc   dlindependent_calloc
00526 #define public_iCOMALLOc dlindependent_comalloc
00527 #else /* USE_DL_PREFIX */
00528 #define public_cALLOc    calloc
00529 #define public_fREe      free
00530 #define public_cFREe     cfree
00531 #define public_mALLOc    malloc
00532 #define public_mEMALIGn  memalign
00533 #define public_rEALLOc   realloc
00534 #define public_vALLOc    valloc
00535 #define public_pVALLOc   pvalloc
00536 #define public_mALLINFo  mallinfo
00537 #define public_mALLOPt   mallopt
00538 #define public_mTRIm     malloc_trim
00539 #define public_mSTATs    malloc_stats
00540 #define public_mUSABLe   malloc_usable_size
00541 #define public_iCALLOc   independent_calloc
00542 #define public_iCOMALLOc independent_comalloc
00543 #endif /* USE_DL_PREFIX */
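
/*
  A minimal sketch of the effect of the name mappings above: with
  USE_DL_PREFIX defined (as it is in this file) and no public wrappers,
  a call such as

    void* p = dlmalloc(100);
    dlfree(p);

  reaches the internal mALLOc/fREe routines via public_mALLOc and
  public_fREe, leaving any system-supplied malloc/free untouched.
*/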
00544 
00545 
00546 /*
00547   HAVE_MEMCPY should be defined if you are not otherwise using
00548   ANSI STD C, but still have memcpy and memset in your C library
00549   and want to use them in calloc and realloc. Otherwise simple
00550   macro versions are defined below.
00551 
00552   USE_MEMCPY should be defined as 1 if you actually want to
00553   have memset and memcpy called. People report that the macro
00554   versions are faster than libc versions on some systems.
00555   
00556   Even if USE_MEMCPY is set to 1, loops to copy/clear small chunks
00557   (of <= 36 bytes) are manually unrolled in realloc and calloc.
00558 */
00559 
00560 /* #define HAVE_MEMCPY */
00561 
00562 #ifndef USE_MEMCPY
00563 #ifdef HAVE_MEMCPY
00564 #define USE_MEMCPY 1
00565 #else
00566 #define USE_MEMCPY 0
00567 #endif
00568 #endif
00569 
00570 
00571 #if (__STD_C || defined(HAVE_MEMCPY))
00572 
00573 #ifdef WIN32
00574 /* On Win32 memset and memcpy are already declared in windows.h */
00575 #else
00576 #if __STD_C
00577 void* memset(void*, int, size_t);
00578 void* memcpy(void*, const void*, size_t);
00579 #else
00580 Void_t* memset();
00581 Void_t* memcpy();
00582 #endif
00583 #endif
00584 #endif
00585 
00586 /*
00587   MALLOC_FAILURE_ACTION is the action to take before "return 0" when
00588   malloc is unable to return memory, either because memory is
00589   exhausted or because of illegal arguments.
00590   
00591   By default, it sets errno if running on a __STD_C platform, else does nothing.
00592 */
00593 
00594 #ifndef MALLOC_FAILURE_ACTION
00595 #if __STD_C
00596 #define MALLOC_FAILURE_ACTION \
00597    errno = ENOMEM;
00598 
00599 #else
00600 #define MALLOC_FAILURE_ACTION
00601 #endif
00602 #endif
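
/*
  A sketch of customizing the failure hook: MALLOC_FAILURE_ACTION may be
  predefined (for example on the compiler command line) to run any
  statement(s) at the point of failure.  Here log_oom() stands for a
  handler of your own; it is not part of this file:

    #define MALLOC_FAILURE_ACTION  log_oom("out of memory");

  The default above merely sets errno to ENOMEM on __STD_C platforms.
*/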
00603 
00604 /*
00605   MORECORE-related declarations. By default, rely on sbrk
00606 */
00607 
00608 
00609 #ifdef LACKS_UNISTD_H
00610 #if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
00611 #if __STD_C
00612 extern Void_t*     sbrk(ptrdiff_t);
00613 #else
00614 extern Void_t*     sbrk();
00615 #endif
00616 #endif
00617 #endif
00618 
00619 /*
00620   MORECORE is the name of the routine to call to obtain more memory
00621   from the system.  See below for general guidance on writing
00622   alternative MORECORE functions, as well as a version for WIN32 and a
00623   sample version for pre-OSX macos.
00624 */
00625 
00626 #ifndef MORECORE
00627 #define MORECORE sbrk
00628 #endif
00629 
00630 /*
00631   MORECORE_FAILURE is the value returned upon failure of MORECORE
00632   as well as mmap. Since it cannot be an otherwise valid memory address,
00633   and must reflect values of standard sys calls, you probably ought not
00634   try to redefine it.
00635 */
00636 
00637 #ifndef MORECORE_FAILURE
00638 #define MORECORE_FAILURE (-1)
00639 #endif
00640 
00641 /*
00642   If MORECORE_CONTIGUOUS is true, take advantage of the fact that
00643   consecutive calls to MORECORE with positive arguments always return
00644   contiguous increasing addresses.  This is true of unix sbrk.  Even
00645   if not defined, when regions happen to be contiguous, malloc will
00646   permit allocations spanning regions obtained from different
00647   calls. But defining this when applicable enables some stronger
00648   consistency checks and space efficiencies.
00649 */
00650 
00651 #ifndef MORECORE_CONTIGUOUS
00652 #define MORECORE_CONTIGUOUS 1
00653 #endif
00654 
00655 /*
00656   Define MORECORE_CANNOT_TRIM if your version of MORECORE
00657   cannot release space back to the system when given negative
00658   arguments. This is generally necessary only if you are using
00659   a hand-crafted MORECORE function that cannot handle negative arguments.
00660 */
00661 
00662 /* #define MORECORE_CANNOT_TRIM */
00663 
00664 
00665 /*
00666   Define HAVE_MMAP as true to optionally make malloc() use mmap() to
00667   allocate very large blocks.  These will be returned to the
00668   operating system immediately after a free(). Also, if mmap
00669   is available, it is used as a backup strategy in cases where
00670   MORECORE fails to provide space from system.
00671 
00672   This malloc is best tuned to work with mmap for large requests.
00673   If you do not have mmap, operations involving very large chunks (1MB
00674   or so) may be slower than you'd like.
00675 */
00676 
00677 #ifndef HAVE_MMAP
00678 #define HAVE_MMAP 1
00679 
00680 /* 
00681    Standard unix mmap using /dev/zero clears memory so calloc doesn't
00682    need to.
00683 */
00684 
00685 #ifndef MMAP_CLEARS
00686 #define MMAP_CLEARS 1
00687 #endif
00688 
00689 #else /* no mmap */
00690 #ifndef MMAP_CLEARS
00691 #define MMAP_CLEARS 0
00692 #endif
00693 #endif
00694 
00695 
00696 /* 
00697    MMAP_AS_MORECORE_SIZE is the minimum mmap size argument to use if
00698    sbrk fails, and mmap is used as a backup (which is done only if
00699    HAVE_MMAP).  The value must be a multiple of page size.  This
00700    backup strategy generally applies only when systems have "holes" in
00701    address space, so sbrk cannot perform contiguous expansion, but
00702    there is still space available on system.  On systems for which
00703    this is known to be useful (i.e. most linux kernels), this occurs
00704    only when programs allocate huge amounts of memory.  Between this,
00705    and the fact that mmap regions tend to be limited, the size should
00706    be large, to avoid too many mmap calls and thus avoid running out
00707    of kernel resources.
00708 */
00709 
00710 #ifndef MMAP_AS_MORECORE_SIZE
00711 #define MMAP_AS_MORECORE_SIZE (1024 * 1024)
00712 #endif
00713 
00714 /*
00715   Define HAVE_MREMAP to make realloc() use mremap() to re-allocate
00716   large blocks.  This is currently only possible on Linux with
00717   kernel versions newer than 1.3.77.
00718 */
00719 
00720 #ifndef HAVE_MREMAP
00721 #ifdef linux
00722 #define HAVE_MREMAP 1
00723 #else
00724 #define HAVE_MREMAP 0
00725 #endif
00726 
00727 #endif /* HAVE_MREMAP */
00728 
00729 
00730 /*
00731   The system page size. To the extent possible, this malloc manages
00732   memory from the system in page-size units.  Note that this value is
00733   cached during initialization into a field of malloc_state. So even
00734   if malloc_getpagesize is a function, it is only called once.
00735 
00736   The following mechanics for getpagesize were adapted from bsd/gnu
00737   getpagesize.h. If none of the system-probes here apply, a value of
00738   4096 is used, which should be OK: If they don't apply, then using
00739   the actual value probably doesn't impact performance.
00740 */
00741 
00742 
00743 #ifndef malloc_getpagesize
00744 
00745 #ifndef LACKS_UNISTD_H
00746 #  include <unistd.h>
00747 #endif
00748 
00749 #  ifdef _SC_PAGESIZE         /* some SVR4 systems omit an underscore */
00750 #    ifndef _SC_PAGE_SIZE
00751 #      define _SC_PAGE_SIZE _SC_PAGESIZE
00752 #    endif
00753 #  endif
00754 
00755 #  ifdef _SC_PAGE_SIZE
00756 #    define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
00757 #  else
00758 #    if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
00759        extern size_t getpagesize();
00760 #      define malloc_getpagesize getpagesize()
00761 #    else
00762 #      ifdef WIN32 /* use supplied emulation of getpagesize */
00763 #        define malloc_getpagesize getpagesize() 
00764 #      else
00765 #        ifndef LACKS_SYS_PARAM_H
00766 #          include <sys/param.h>
00767 #        endif
00768 #        ifdef EXEC_PAGESIZE
00769 #          define malloc_getpagesize EXEC_PAGESIZE
00770 #        else
00771 #          ifdef NBPG
00772 #            ifndef CLSIZE
00773 #              define malloc_getpagesize NBPG
00774 #            else
00775 #              define malloc_getpagesize (NBPG * CLSIZE)
00776 #            endif
00777 #          else
00778 #            ifdef NBPC
00779 #              define malloc_getpagesize NBPC
00780 #            else
00781 #              ifdef PAGESIZE
00782 #                define malloc_getpagesize PAGESIZE
00783 #              else /* just guess */
00784 #                define malloc_getpagesize (4096) 
00785 #              endif
00786 #            endif
00787 #          endif
00788 #        endif
00789 #      endif
00790 #    endif
00791 #  endif
00792 #endif
00793 
00794 /*
00795   This version of malloc supports the standard SVID/XPG mallinfo
00796   routine that returns a struct containing usage properties and
00797   statistics. It should work on any SVID/XPG compliant system that has
00798   a /usr/include/malloc.h defining struct mallinfo. (If you'd like to
00799   install such a thing yourself, cut out the preliminary declarations
00800   as described above and below and save them in a malloc.h file. But
00801   there's no compelling reason to bother to do this.)
00802 
00803   The main declaration needed is the mallinfo struct that is returned
00804   (by-copy) by mallinfo().  The SVID/XPG mallinfo struct contains a
00805   bunch of fields that are not even meaningful in this version of
00806   malloc.  These fields are instead filled by mallinfo() with
00807   other numbers that might be of interest.
00808 
00809   HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
00810   /usr/include/malloc.h file that includes a declaration of struct
00811   mallinfo.  If so, it is included; else an SVID2/XPG2 compliant
00812   version is declared below.  These must be precisely the same for
00813   mallinfo() to work.  The original SVID version of this struct,
00814   defined on most systems with mallinfo, declares all fields as
00815   ints. But some others define them as unsigned long. If your system
00816   defines the fields using a type of different width than listed here,
00817   you must #include your system version and #define
00818   HAVE_USR_INCLUDE_MALLOC_H.
00819 */
00820 
00821 /* #define HAVE_USR_INCLUDE_MALLOC_H */
00822 
00823 #ifdef HAVE_USR_INCLUDE_MALLOC_H
00824 #include "/usr/include/malloc.h"
00825 #else
00826 
00827 /* SVID2/XPG mallinfo structure */
00828 
00829 struct mallinfo {
00830   int arena;    /* non-mmapped space allocated from system */
00831   int ordblks;  /* number of free chunks */
00832   int smblks;   /* number of fastbin blocks */
00833   int hblks;    /* number of mmapped regions */
00834   int hblkhd;   /* space in mmapped regions */
00835   int usmblks;  /* maximum total allocated space */
00836   int fsmblks;  /* space available in freed fastbin blocks */
00837   int uordblks; /* total allocated space */
00838   int fordblks; /* total free space */
00839   int keepcost; /* top-most, releasable (via malloc_trim) space */
00840 };
00841 
00842 /*
00843   SVID/XPG defines four standard parameter numbers for mallopt,
00844   normally defined in malloc.h.  Only one of these (M_MXFAST) is used
00845   in this malloc. The others (M_NLBLKS, M_GRAIN, M_KEEP) don't apply,
00846   so setting them has no effect. But this malloc also supports other
00847   options in mallopt described below.
00848 */
00849 #endif
00850 
00851 
00852 /* ---------- description of public routines ------------ */
00853 
00854 /*
00855   malloc(size_t n)
00856   Returns a pointer to a newly allocated chunk of at least n bytes, or null
00857   if no space is available. Additionally, on failure, errno is
00858   set to ENOMEM on ANSI C systems.
00859 
00860   If n is zero, malloc returns a minimum-sized chunk. (The minimum
00861   size is 16 bytes on most 32bit systems, and 24 or 32 bytes on 64bit
00862   systems.)  On most systems, size_t is an unsigned type, so calls
00863   with negative arguments are interpreted as requests for huge amounts
00864   of space, which will often fail. The maximum supported value of n
00865   differs across systems, but is in all cases less than the maximum
00866   representable value of a size_t.
00867 */
00868 #if __STD_C
00869 Void_t*  public_mALLOc(size_t);
00870 #else
00871 Void_t*  public_mALLOc();
00872 #endif
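
/*
  A minimal usage sketch of the contract above, using the dl-prefixed
  public name selected by USE_DL_PREFIX:

    void* p = dlmalloc(100);
    if (p == 0) {
      // no space was available; on ANSI C systems errno is ENOMEM
    }
*/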
00873 
00874 /*
00875   free(Void_t* p)
00876   Releases the chunk of memory pointed to by p, that had been previously
00877   allocated using malloc or a related routine such as realloc.
00878   It has no effect if p is null. It can have arbitrary (i.e., bad!)
00879   effects if p has already been freed.
00880 
00881   Unless disabled (using mallopt), freeing very large spaces will,
00882   when possible, automatically trigger operations that give
00883   back unused memory to the system, thus reducing program footprint.
00884 */
00885 #if __STD_C
00886 void     public_fREe(Void_t*);
00887 #else
00888 void     public_fREe();
00889 #endif
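
/*
  A usage sketch, assuming the dl-prefixed names from above; dlfree(0)
  is a no-op, while freeing the same pointer twice is undefined:

    void* p = dlmalloc(32);
    dlfree(p);     // releases the chunk
    dlfree(0);     // has no effect
    // calling dlfree(p) again here would have arbitrary (bad) effects
*/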
00890 
00891 /*
00892   calloc(size_t n_elements, size_t element_size);
00893   Returns a pointer to n_elements * element_size bytes, with all locations
00894   set to zero.
00895 */
00896 #if __STD_C
00897 Void_t*  public_cALLOc(size_t, size_t);
00898 #else
00899 Void_t*  public_cALLOc();
00900 #endif
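
/*
  A usage sketch, assuming the dl-prefixed names from above:

    int* counts = (int*) dlcalloc(10, sizeof(int));
    if (counts != 0) {
      // all ten ints start out zeroed
      dlfree(counts);
    }
*/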
00901 
00902 /*
00903   realloc(Void_t* p, size_t n)
00904   Returns a pointer to a chunk of size n that contains the same data
00905   as does chunk p up to the minimum of (n, p's size) bytes, or null
00906   if no space is available. 
00907 
00908   The returned pointer may or may not be the same as p. The algorithm
00909   prefers extending p when possible, otherwise it employs the
00910   equivalent of a malloc-copy-free sequence.
00911 
00912   If p is null, realloc is equivalent to malloc.  
00913 
00914   If space is not available, realloc returns null, errno is set (if on
00915   ANSI) and p is NOT freed.
00916 
00917   If n is for fewer bytes than already held by p, the newly unused
00918   space is lopped off and freed if possible.  Unless the #define
00919   REALLOC_ZERO_BYTES_FREES is set, realloc with a size argument of
00920   zero (re)allocates a minimum-sized chunk.
00921 
00922   Large chunks that were internally obtained via mmap will always
00923   be reallocated using malloc-copy-free sequences unless
00924   the system supports MREMAP (currently only linux).
00925 
00926   The old unix realloc convention of allowing the last-free'd chunk
00927   to be used as an argument to realloc is not supported.
00928 */
00929 #if __STD_C
00930 Void_t*  public_rEALLOc(Void_t*, size_t);
00931 #else
00932 Void_t*  public_rEALLOc();
00933 #endif
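
/*
  A usage sketch of the rules above, assuming the dl-prefixed names;
  note that the old pointer remains valid when dlrealloc returns null:

    char* buf = (char*) dlmalloc(64);
    char* bigger = (char*) dlrealloc(buf, 256);
    if (bigger == 0) {
      dlfree(buf);      // buf was not freed by the failed realloc
    } else {
      buf = bigger;     // the first 64 bytes are preserved
    }
*/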
00934 
00935 /*
00936   memalign(size_t alignment, size_t n);
00937   Returns a pointer to a newly allocated chunk of n bytes, aligned
00938   in accord with the alignment argument.
00939 
00940   The alignment argument should be a power of two. If the argument is
00941   not a power of two, the nearest greater power is used.
00942   8-byte alignment is guaranteed by normal malloc calls, so don't
00943   bother calling memalign with an argument of 8 or less.
00944 
00945   Overreliance on memalign is a sure way to fragment space.
00946 */
00947 #if __STD_C
00948 Void_t*  public_mEMALIGn(size_t, size_t);
00949 #else
00950 Void_t*  public_mEMALIGn();
00951 #endif
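
/*
  A usage sketch, assuming the dl-prefixed names: request 64-byte
  alignment, a power of two larger than the default 8-byte alignment:

    void* p = dlmemalign(64, 1000);
    // on success, ((size_t)p % 64) == 0; on failure p is 0
*/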
00952 
00953 /*
00954   valloc(size_t n);
00955   Equivalent to memalign(pagesize, n), where pagesize is the page
00956   size of the system. If the pagesize is unknown, 4096 is used.
00957 */
00958 #if __STD_C
00959 Void_t*  public_vALLOc(size_t);
00960 #else
00961 Void_t*  public_vALLOc();
00962 #endif
00963 
00964 
00965 
00966 /*
00967   mallopt(int parameter_number, int parameter_value)
00968   Sets tunable parameters. The format is to provide a
00969   (parameter-number, parameter-value) pair.  mallopt then sets the
00970   corresponding parameter to the argument value if it can (i.e., so
00971   long as the value is meaningful), and returns 1 if successful else
00972   0.  SVID/XPG/ANSI defines four standard param numbers for mallopt,
00973   normally defined in malloc.h.  Only one of these (M_MXFAST) is used
00974   in this malloc. The others (M_NLBLKS, M_GRAIN, M_KEEP) don't apply,
00975   so setting them has no effect. But this malloc also supports four
00976   other options in mallopt. See below for details.  Briefly, supported
00977   parameters are as follows (listed defaults are for "typical"
00978   configurations).
00979 
00980   Symbol            param #   default    allowed param values
00981   M_MXFAST          1         64         0-80  (0 disables fastbins)
00982   M_TRIM_THRESHOLD -1         128*1024   any   (-1U disables trimming)
00983   M_TOP_PAD        -2         0          any  
00984   M_MMAP_THRESHOLD -3         128*1024   any   (or 0 if no MMAP support)
00985   M_MMAP_MAX       -4         65536      any   (0 disables use of mmap)
00986 */
00987 #if __STD_C
00988 int      public_mALLOPt(int, int);
00989 #else
00990 int      public_mALLOPt();
00991 #endif
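
/*
  A usage sketch based on the parameter table above, assuming the
  dl-prefixed names:

    if (dlmallopt(M_MXFAST, 0) == 0) {
      // the request to disable fastbins was not accepted
    }
*/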
00992 
00993 
00994 /*
00995   mallinfo()
00996   Returns (by copy) a struct containing various summary statistics:
00997 
00998   arena:     current total non-mmapped bytes allocated from system 
00999   ordblks:   the number of free chunks 
01000   smblks:    the number of fastbin blocks (i.e., small chunks that
01001                have been freed but not yet reused or consolidated)
01002   hblks:     current number of mmapped regions 
01003   hblkhd:    total bytes held in mmapped regions 
01004   usmblks:   the maximum total allocated space. This will be greater
01005                 than current total if trimming has occurred.
01006   fsmblks:   total bytes held in fastbin blocks 
01007   uordblks:  current total allocated space (normal or mmapped)
01008   fordblks:  total free space 
01009   keepcost:  the maximum number of bytes that could ideally be released
01010                back to system via malloc_trim. ("ideally" means that
01011                it ignores page restrictions etc.)
01012 
01013   Because these fields are ints, but internal bookkeeping may
01014   be kept as longs, the reported values may wrap around zero and 
01015   thus be inaccurate.
01016 */
01017 #if __STD_C
01018 struct mallinfo public_mALLINFo(void);
01019 #else
01020 struct mallinfo public_mALLINFo();
01021 #endif
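
/*
  A usage sketch reading a few of the fields described above, assuming
  the dl-prefixed names (the fields are plain ints):

    struct mallinfo mi = dlmallinfo();
    fprintf(stderr, "in use: %d  free: %d  mmapped: %d\n",
            mi.uordblks, mi.fordblks, mi.hblkhd);
*/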
01022 
01023 /*
01024   independent_calloc(size_t n_elements, size_t element_size, Void_t* chunks[]);
01025 
01026   independent_calloc is similar to calloc, but instead of returning a
01027   single cleared space, it returns an array of pointers to n_elements
01028   independent elements that can hold contents of size elem_size, each
01029   of which starts out cleared, and can be independently freed,
01030   realloc'ed etc. The elements are guaranteed to be adjacently
01031   allocated (this is not guaranteed to occur with multiple callocs or
01032   mallocs), which may also improve cache locality in some
01033   applications.
01034 
01035   The "chunks" argument is optional (i.e., may be null, which is
01036   probably the most typical usage). If it is null, the returned array
01037   is itself dynamically allocated and should also be freed when it is
01038   no longer needed. Otherwise, the chunks array must be of at least
01039   n_elements in length. It is filled in with the pointers to the
01040   chunks.
01041 
01042   In either case, independent_calloc returns this pointer array, or
01043   null if the allocation failed.  If n_elements is zero and "chunks"
01044   is null, it returns a chunk representing an array with zero elements
01045   (which should be freed if not wanted).
01046 
01047   Each element must be individually freed when it is no longer
01048   needed. If you'd like to instead be able to free all at once, you
01049   should instead use regular calloc and assign pointers into this
01050   space to represent elements.  (In this case though, you cannot
01051   independently free elements.)
01052   
01053   independent_calloc simplifies and speeds up implementations of many
01054   kinds of pools.  It may also be useful when constructing large data
01055   structures that initially have a fixed number of fixed-sized nodes,
01056   but the number is not known at compile time, and some of the nodes
01057   may later need to be freed. For example:
01058 
01059   struct Node { int item; struct Node* next; };
01060   
01061   struct Node* build_list() {
01062     struct Node** pool;
01063     int i, n = read_number_of_nodes_needed();
01064     if (n <= 0) return 0;
01065     pool = (struct Node**) independent_calloc(n, sizeof(struct Node), 0);
01066     if (pool == 0) die(); 
01067     // organize into a linked list... 
01068     struct Node* first = pool[0];
01069     for (i = 0; i < n-1; ++i) 
01070       pool[i]->next = pool[i+1];
01071     free(pool);     // Can now free the array (or not, if it is needed later)
01072     return first;
01073   }
01074 */
01075 #if __STD_C
01076 Void_t** public_iCALLOc(size_t, size_t, Void_t**);
01077 #else
01078 Void_t** public_iCALLOc();
01079 #endif
01080 
01081 /*
01082   independent_comalloc(size_t n_elements, size_t sizes[], Void_t* chunks[]);
01083 
01084   independent_comalloc allocates, all at once, a set of n_elements
01085   chunks with sizes indicated in the "sizes" array.    It returns
01086   an array of pointers to these elements, each of which can be
01087   independently freed, realloc'ed etc. The elements are guaranteed to
01088   be adjacently allocated (this is not guaranteed to occur with
01089   multiple callocs or mallocs), which may also improve cache locality
01090   in some applications.
01091 
01092   The "chunks" argument is optional (i.e., may be null). If it is null
01093   the returned array is itself dynamically allocated and should also
01094   be freed when it is no longer needed. Otherwise, the chunks array
01095   must be of at least n_elements in length. It is filled in with the
01096   pointers to the chunks.
01097 
01098   In either case, independent_comalloc returns this pointer array, or
01099   null if the allocation failed.  If n_elements is zero and chunks is
01100   null, it returns a chunk representing an array with zero elements
01101   (which should be freed if not wanted).
01102   
01103   Each element must be individually freed when it is no longer
01104   needed. If you'd like to instead be able to free all at once, you
01105   should instead use a single regular malloc, and assign pointers at
01106   particular offsets in the aggregate space. (In this case though, you 
01107   cannot independently free elements.)
01108 
01109   independent_comalloc differs from independent_calloc in that each
01110   element may have a different size, and also that it does not
01111   automatically clear elements.
01112 
01113   independent_comalloc can be used to speed up allocation in cases
01114   where several structs or objects must always be allocated at the
01115   same time.  For example:
01116 
01117   struct Head { ... };
01118   struct Foot { ... };
01119 
01120   void send_message(char* msg) {
01121     int msglen = strlen(msg);
01122     size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
01123     void* chunks[3];
01124     if (independent_comalloc(3, sizes, chunks) == 0)
01125       die();
01126     struct Head* head = (struct Head*)(chunks[0]);
01127     char*        body = (char*)(chunks[1]);
01128     struct Foot* foot = (struct Foot*)(chunks[2]);
01129     // ...
01130   }
01131 
01132   In general though, independent_comalloc is worth using only for
01133   larger values of n_elements. For small values, you probably won't
01134   detect enough difference from series of malloc calls to bother.
01135 
01136   Overuse of independent_comalloc can increase overall memory usage,
01137   since it cannot reuse existing noncontiguous small chunks that
01138   might be available for some of the elements.
01139 */
01140 #if __STD_C
01141 Void_t** public_iCOMALLOc(size_t, size_t*, Void_t**);
01142 #else
01143 Void_t** public_iCOMALLOc();
01144 #endif
01145 
01146 
01147 /*
01148   pvalloc(size_t n);
01149   Equivalent to valloc(minimum-page-that-holds(n)), that is,
01150   round up n to nearest pagesize.
01151  */
01152 #if __STD_C
01153 Void_t*  public_pVALLOc(size_t);
01154 #else
01155 Void_t*  public_pVALLOc();
01156 #endif
01157 
01158 /*
01159   cfree(Void_t* p);
01160   Equivalent to free(p).
01161 
01162   cfree is needed/defined on some systems that pair it with calloc,
01163   for odd historical reasons (such as: cfree is used in example 
01164   code in the first edition of K&R).
01165 */
01166 #if __STD_C
01167 void     public_cFREe(Void_t*);
01168 #else
01169 void     public_cFREe();
01170 #endif
01171 
01172 /*
01173   malloc_trim(size_t pad);
01174 
01175   If possible, gives memory back to the system (via negative
01176   arguments to sbrk) if there is unused memory at the `high' end of
01177   the malloc pool. You can call this after freeing large blocks of
01178   memory to potentially reduce the system-level memory requirements
01179   of a program. However, it cannot guarantee to reduce memory. Under
01180   some allocation patterns, some large free blocks of memory will be
01181   locked between two used chunks, so they cannot be given back to
01182   the system.
01183   
01184   The `pad' argument to malloc_trim represents the amount of free
01185   trailing space to leave untrimmed. If this argument is zero,
01186   only the minimum amount of memory to maintain internal data
01187   structures will be left (one page or less). Non-zero arguments
01188   can be supplied to maintain enough trailing space to service
01189   future expected allocations without having to re-obtain memory
01190   from the system.
01191   
01192   Malloc_trim returns 1 if it actually released any memory, else 0.
01193   On systems that do not support "negative sbrks", it will always
01194   return 0.
01195 */
01196 #if __STD_C
01197 int      public_mTRIm(size_t);
01198 #else
01199 int      public_mTRIm();
01200 #endif
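
/*
  A usage sketch, assuming the dl-prefixed names; big_block stands for
  some previously malloced large buffer:

    dlfree(big_block);
    // keep about 64K of trailing space, release the rest if possible
    if (dlmalloc_trim(64 * 1024) == 0) {
      // nothing could be released (or negative sbrk is unsupported)
    }
*/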
01201 
01202 /*
01203   malloc_usable_size(Void_t* p);
01204 
01205   Returns the number of bytes you can actually use in
01206   an allocated chunk, which may be more than you requested (although
01207   often not) due to alignment and minimum size constraints.
01208   You can use this many bytes without worrying about
01209   overwriting other allocated objects. This is not a particularly great
01210   programming practice. malloc_usable_size can be more useful in
01211   debugging and assertions, for example:
01212 
01213   p = malloc(n);
01214   assert(malloc_usable_size(p) >= 256);
01215 
01216 */
01217 #if __STD_C
01218 size_t   public_mUSABLe(Void_t*);
01219 #else
01220 size_t   public_mUSABLe();
01221 #endif
01222 
01223 /*
01224   malloc_stats();
01225   Prints on stderr the amount of space obtained from the system (both
01226   via sbrk and mmap), the maximum amount (which may be more than
01227   current if malloc_trim and/or munmap got called), and the current
01228   number of bytes allocated via malloc (or realloc, etc) but not yet
01229   freed. Note that this is the number of bytes allocated, not the
01230   number requested. It will be larger than the number requested
01231   because of alignment and bookkeeping overhead. Because it includes
01232   alignment wastage as being in use, this figure may be greater than
01233   zero even when no user-level chunks are allocated.
01234 
01235   The reported current and maximum system memory can be inaccurate if
01236   a program makes other calls to system memory allocation functions
01237   (normally sbrk) outside of malloc.
01238 
01239   malloc_stats prints only the most commonly interesting statistics.
01240   More information can be obtained by calling mallinfo.
01241 
01242 */
01243 #if __STD_C
01244 void     public_mSTATs(void);
01245 #else
01246 void     public_mSTATs();
01247 #endif
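
/*
  A usage sketch, assuming the dl-prefixed names; typically called at
  exit or at interesting checkpoints:

    dlmalloc_stats();  // writes system bytes, max system bytes,
                       // and in-use bytes to stderr
*/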
01248 
01249 /* mallopt tuning options */
01250 
01251 /*
01252   M_MXFAST is the maximum request size used for "fastbins", special bins
01253   that hold returned chunks without consolidating their spaces. This
01254   enables future requests for chunks of the same size to be handled
01255   very quickly, but can increase fragmentation, and thus increase the
01256   overall memory footprint of a program.
01257 
01258   This malloc manages fastbins very conservatively yet still
01259   efficiently, so fragmentation is rarely a problem for values less
01260   than or equal to the default.  The maximum supported value of MXFAST
01261   is 80. You wouldn't want it any higher than this anyway.  Fastbins
01262   are designed especially for use with many small structs, objects or
01263   strings -- the default handles structs/objects/arrays with sizes up
01264   to 8 4byte fields, or small strings representing words, tokens,
01265   etc. Using fastbins for larger objects normally worsens
01266   fragmentation without improving speed.
01267 
01268   M_MXFAST is set in REQUEST size units. It is internally used in
01269   chunksize units, which adds padding and alignment.  You can reduce
01270   M_MXFAST to 0 to disable all use of fastbins.  This causes the malloc
01271   algorithm to be a closer approximation of fifo-best-fit in all cases,
01272   not just for larger requests, but will generally cause it to be
01273   slower.
01274 */
01275 
01276 
01277 /* M_MXFAST is a standard SVID/XPG tuning option, usually listed in malloc.h */
01278 #ifndef M_MXFAST
01279 #define M_MXFAST            1    
01280 #endif
01281 
01282 #ifndef DEFAULT_MXFAST
01283 #define DEFAULT_MXFAST     64
01284 #endif
01285 
01286 
01287 /*
01288   M_TRIM_THRESHOLD is the maximum amount of unused top-most memory
01289   to keep before releasing via malloc_trim in free().
01290 
01291   Automatic trimming is mainly useful in long-lived programs.
01292   Because trimming via sbrk can be slow on some systems, and can
01293   sometimes be wasteful (in cases where programs immediately
01294   afterward allocate more large chunks), the value should be high
01295   enough so that your overall system performance would improve by
01296   releasing this much memory.
01297 
01298   The trim threshold and the mmap control parameters (see below)
01299   can be traded off with one another. Trimming and mmapping are
01300   two different ways of releasing unused memory back to the
01301   system. Between these two, it is often possible to keep
01302   system-level demands of a long-lived program down to a bare
01303   minimum. For example, in one test suite of sessions measuring
01304   the XF86 X server on Linux, using a trim threshold of 128K and a
01305   mmap threshold of 192K led to near-minimal long term resource
01306   consumption.
01307 
01308   If you are using this malloc in a long-lived program, it should
01309   pay to experiment with these values.  As a rough guide, you
01310   might set to a value close to the average size of a process
01311   (program) running on your system.  Releasing this much memory
01312   would allow such a process to run in memory.  Generally, it's
01313   worth it to tune for trimming rather than memory mapping when a
01314   program undergoes phases where several large chunks are
01315   allocated and released in ways that can reuse each other's
01316   storage, perhaps mixed with phases where there are no such
01317   chunks at all.  And in well-behaved long-lived programs,
01318   controlling release of large blocks via trimming versus mapping
01319   is usually faster.
01320 
01321   However, in most programs, these parameters serve mainly as
01322   protection against the system-level effects of carrying around
01323   massive amounts of unneeded memory. Since frequent calls to
01324   sbrk, mmap, and munmap otherwise degrade performance, the default
01325   parameters are set to relatively high values that serve only as
01326   safeguards.
01327 
01328   The trim value must be greater than page size to have any useful
01329   effect.  To disable trimming completely, you can set it to
01330   (unsigned long)(-1).
01331 
01332   Trim settings interact with fastbin (MXFAST) settings: Unless
01333   TRIM_FASTBINS is defined, automatic trimming never takes place upon
01334   freeing a chunk with size less than or equal to MXFAST. Trimming is
01335   instead delayed until subsequent freeing of larger chunks. However,
01336   you can still force an attempted trim by calling malloc_trim.
01337 
01338   Also, trimming is not generally possible in cases where
01339   the main arena is obtained via mmap.
01340 
01341   Note that the trick some people use of mallocing a huge space and
01342   then freeing it at program startup, in an attempt to reserve system
01343   memory, doesn't have the intended effect under automatic trimming,
01344   since that memory will immediately be returned to the system.
01345 */
01346 
01347 #define M_TRIM_THRESHOLD       -1
01348 
01349 #ifndef DEFAULT_TRIM_THRESHOLD
01350 #define DEFAULT_TRIM_THRESHOLD (1024 * 1024 * 1024) /* (128 * 1024) EDB */
01351 #endif
01352 
01353 /*
01354   M_TOP_PAD is the amount of extra `padding' space to allocate or
01355   retain whenever sbrk is called. It is used in two ways internally:
01356 
01357   * When sbrk is called to extend the top of the arena to satisfy
01358   a new malloc request, this much padding is added to the sbrk
01359   request.
01360 
01361   * When malloc_trim is called automatically from free(),
01362   it is used as the `pad' argument.
01363 
01364   In both cases, the actual amount of padding is rounded
01365   so that the end of the arena is always a system page boundary.
01366 
01367   The main reason for using padding is to avoid calling sbrk so
01368   often. Having even a small pad greatly reduces the likelihood
01369   that nearly every malloc request during program start-up (or
01370   after trimming) will invoke sbrk, which needlessly wastes
01371   time.
01372 
01373   Automatic rounding-up to page-size units is normally sufficient
01374   to avoid measurable overhead, so the default is 0.  However, in
01375   systems where sbrk is relatively slow, it can pay to increase
01376   this value, at the expense of carrying around more memory than
01377   the program needs.
01378 */
01379 
01380 #define M_TOP_PAD              -2
01381 
01382 #ifndef DEFAULT_TOP_PAD
01383 #define DEFAULT_TOP_PAD        (0)
01384 #endif
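
/*
  Illustrative sketch (not part of the original source): a long-lived
  program that wants freed memory returned to the system sooner could
  lower the trim threshold, and pad sbrk calls a little, through
  mallopt (the default public name for mALLOPt in this malloc):

      mallopt(M_TRIM_THRESHOLD, 256 * 1024);   trim top when it exceeds 256K
      mallopt(M_TOP_PAD,         64 * 1024);   ask sbrk for 64K of extra slack

  The 256K/64K figures are examples only; see the discussion of
  M_TRIM_THRESHOLD and M_TOP_PAD above for how to choose values.
*/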
01385 
01386 /*
01387   M_MMAP_THRESHOLD is the request size threshold for using mmap()
01388   to service a request. Requests of at least this size that cannot
01389   be allocated using already-existing space will be serviced via mmap.
01390   (If enough normal freed space already exists it is used instead.)
01391 
01392   Using mmap segregates relatively large chunks of memory so that
01393   they can be individually obtained and released from the host
01394   system. A request serviced through mmap is never reused by any
01395   other request (at least not directly; the system may just so
01396   happen to remap successive requests to the same locations).
01397 
01398   Segregating space in this way has the benefits that:
01399 
01400    1. Mmapped space can ALWAYS be individually released back 
01401       to the system, which helps keep the system level memory 
01402       demands of a long-lived program low. 
01403    2. Mapped memory can never become `locked' between
01404       other chunks, as can happen with normally allocated chunks, which
01405       means that even trimming via malloc_trim would not release them.
01406    3. On some systems with "holes" in address spaces, mmap can obtain
01407       memory that sbrk cannot.
01408 
01409   However, it has the disadvantages that:
01410 
01411    1. The space cannot be reclaimed, consolidated, and then
01412       used to service later requests, as happens with normal chunks.
01413    2. It can lead to more wastage because of mmap page alignment
01414       requirements.
01415    3. It causes malloc performance to be more dependent on host
01416       system memory management support routines which may vary in
01417       implementation quality and may impose arbitrary
01418       limitations. Generally, servicing a request via normal
01419       malloc steps is faster than going through a system's mmap.
01420 
01421   The advantages of mmap nearly always outweigh disadvantages for
01422   "large" chunks, but the value of "large" varies across systems.  The
01423   default is an empirically derived value that works well in most
01424   systems.
01425 */
01426 
01427 #define M_MMAP_THRESHOLD      -3
01428 
01429 #ifndef DEFAULT_MMAP_THRESHOLD
01430 #define DEFAULT_MMAP_THRESHOLD (128 * 1024)
01431 #endif
01432 
01433 /*
01434   M_MMAP_MAX is the maximum number of requests to simultaneously
01435   service using mmap. This parameter exists because some systems
01436   have a limited number of internal tables for
01437   use by mmap, and using more than a few of them may degrade
01438   performance.
01439 
01440   The default is set to a value that serves only as a safeguard.
01441   Setting it to 0 disables use of mmap for servicing large requests.  If
01442   HAVE_MMAP is not set, the default value is 0, and attempts to set it
01443   to non-zero values in mallopt will fail.
01444 */
01445 
01446 #define M_MMAP_MAX             -4
01447 
01448 #ifndef DEFAULT_MMAP_MAX
01449 #if HAVE_MMAP
01450 #define DEFAULT_MMAP_MAX       (65536)
01451 #else
01452 #define DEFAULT_MMAP_MAX       (0)
01453 #endif
01454 #endif
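
/*
  Illustrative sketch (not part of the original source): mmap behavior
  can be tuned the same way.  A program that allocates many 256K
  buffers and wants each returned to the system individually might
  lower the threshold, while one that must avoid mmap entirely can
  disable it:

      mallopt(M_MMAP_THRESHOLD, 192 * 1024);   mmap requests of 192K and up
      mallopt(M_MMAP_MAX, 0);                  or: never use mmap at all

  These two calls are alternatives, not a pair; the values shown are
  examples only.
*/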
01455 
01456 #ifdef __cplusplus
01457 };  /* end of extern "C" */
01458 #endif
01459 
01460 /* 
01461   ========================================================================
01462   To make a fully customizable malloc.h header file, cut everything
01463   above this line, put into file malloc.h, edit to suit, and #include it 
01464   on the next line, as well as in programs that use this malloc.
01465   ========================================================================
01466 */
01467 
01468 /* #include "malloc.h" */
01469 
01470 /* --------------------- public wrappers ---------------------- */
01471 
01472 #ifdef USE_PUBLIC_MALLOC_WRAPPERS
01473 
01474 /* Declare all routines as internal */
01475 #if __STD_C
01476 static Void_t*  mALLOc(size_t);
01477 static void     fREe(Void_t*);
01478 static Void_t*  rEALLOc(Void_t*, size_t);
01479 static Void_t*  mEMALIGn(size_t, size_t);
01480 static Void_t*  vALLOc(size_t);
01481 static Void_t*  pVALLOc(size_t);
01482 static Void_t*  cALLOc(size_t, size_t);
01483 static Void_t** iCALLOc(size_t, size_t, Void_t**);
01484 static Void_t** iCOMALLOc(size_t, size_t*, Void_t**);
01485 static void     cFREe(Void_t*);
01486 static int      mTRIm(size_t);
01487 static size_t   mUSABLe(Void_t*);
01488 static void     mSTATs();
01489 static int      mALLOPt(int, int);
01490 static struct mallinfo mALLINFo(void);
01491 #else
01492 static Void_t*  mALLOc();
01493 static void     fREe();
01494 static Void_t*  rEALLOc();
01495 static Void_t*  mEMALIGn();
01496 static Void_t*  vALLOc();
01497 static Void_t*  pVALLOc();
01498 static Void_t*  cALLOc();
01499 static Void_t** iCALLOc();
01500 static Void_t** iCOMALLOc();
01501 static void     cFREe();
01502 static int      mTRIm();
01503 static size_t   mUSABLe();
01504 static void     mSTATs();
01505 static int      mALLOPt();
01506 static struct mallinfo mALLINFo();
01507 #endif
01508 
01509 /*
01510   MALLOC_PREACTION and MALLOC_POSTACTION should be
01511   defined to return 0 on success, and nonzero on failure.
01512   The return value of MALLOC_POSTACTION is currently ignored
01513   in wrapper functions since there is no reasonable default
01514   action to take on failure.
01515 */
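
/*
  Sketch (assumption, not part of the original source): if you do not
  define USE_MALLOC_LOCK but still want your own locking, you could
  define the pair yourself before this point, e.g. with hypothetical
  my_lock/my_unlock routines that return 0 on success:

      #define MALLOC_PREACTION   my_lock(&my_malloc_lock)
      #define MALLOC_POSTACTION  my_unlock(&my_malloc_lock)

  Whatever you substitute must follow the contract above: evaluate to
  0 on success and nonzero on failure.
*/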
01516 
01517 
01518 #ifdef USE_MALLOC_LOCK
01519 
01520 #ifdef WIN32
01521 
01522 static int mALLOC_MUTEx;
01523 #define MALLOC_PREACTION   slwait(&mALLOC_MUTEx)
01524 #define MALLOC_POSTACTION  slrelease(&mALLOC_MUTEx)
01525 
01526 #else
01527 
01528 #include <pthread.h>
01529 
01530 static pthread_mutex_t mALLOC_MUTEx = PTHREAD_MUTEX_INITIALIZER;
01531 
01532 #define MALLOC_PREACTION   pthread_mutex_lock(&mALLOC_MUTEx)
01533 #define MALLOC_POSTACTION  pthread_mutex_unlock(&mALLOC_MUTEx)
01534 
01535 #endif /* WIN32 */
01536 
01537 #else
01538 
01539 /* Substitute anything you like for these */
01540 
01541 #define MALLOC_PREACTION   (0)
01542 #define MALLOC_POSTACTION  (0)
01543 
01544 #endif
01545 
01546 Void_t* public_mALLOc(size_t bytes) {
01547   Void_t* m;
01548   if (MALLOC_PREACTION != 0) {
01549     return 0;
01550   }
01551   m = mALLOc(bytes);
01552   if (MALLOC_POSTACTION != 0) {
01553   }
01554   return m;
01555 }
01556 
01557 void public_fREe(Void_t* m) {
01558   if (MALLOC_PREACTION != 0) {
01559     return;
01560   }
01561   fREe(m);
01562   if (MALLOC_POSTACTION != 0) {
01563   }
01564 }
01565 
01566 Void_t* public_rEALLOc(Void_t* m, size_t bytes) {
01567   if (MALLOC_PREACTION != 0) {
01568     return 0;
01569   }
01570   m = rEALLOc(m, bytes);
01571   if (MALLOC_POSTACTION != 0) {
01572   }
01573   return m;
01574 }
01575 
01576 Void_t* public_mEMALIGn(size_t alignment, size_t bytes) {
01577   Void_t* m;
01578   if (MALLOC_PREACTION != 0) {
01579     return 0;
01580   }
01581   m = mEMALIGn(alignment, bytes);
01582   if (MALLOC_POSTACTION != 0) {
01583   }
01584   return m;
01585 }
01586 
01587 Void_t* public_vALLOc(size_t bytes) {
01588   Void_t* m;
01589   if (MALLOC_PREACTION != 0) {
01590     return 0;
01591   }
01592   m = vALLOc(bytes);
01593   if (MALLOC_POSTACTION != 0) {
01594   }
01595   return m;
01596 }
01597 
01598 Void_t* public_pVALLOc(size_t bytes) {
01599   Void_t* m;
01600   if (MALLOC_PREACTION != 0) {
01601     return 0;
01602   }
01603   m = pVALLOc(bytes);
01604   if (MALLOC_POSTACTION != 0) {
01605   }
01606   return m;
01607 }
01608 
01609 Void_t* public_cALLOc(size_t n, size_t elem_size) {
01610   Void_t* m;
01611   if (MALLOC_PREACTION != 0) {
01612     return 0;
01613   }
01614   m = cALLOc(n, elem_size);
01615   if (MALLOC_POSTACTION != 0) {
01616   }
01617   return m;
01618 }
01619 
01620 
01621 Void_t** public_iCALLOc(size_t n, size_t elem_size, Void_t** chunks) {
01622   Void_t** m;
01623   if (MALLOC_PREACTION != 0) {
01624     return 0;
01625   }
01626   m = iCALLOc(n, elem_size, chunks);
01627   if (MALLOC_POSTACTION != 0) {
01628   }
01629   return m;
01630 }
01631 
01632 Void_t** public_iCOMALLOc(size_t n, size_t sizes[], Void_t** chunks) {
01633   Void_t** m;
01634   if (MALLOC_PREACTION != 0) {
01635     return 0;
01636   }
01637   m = iCOMALLOc(n, sizes, chunks);
01638   if (MALLOC_POSTACTION != 0) {
01639   }
01640   return m;
01641 }
01642 
01643 void public_cFREe(Void_t* m) {
01644   if (MALLOC_PREACTION != 0) {
01645     return;
01646   }
01647   cFREe(m);
01648   if (MALLOC_POSTACTION != 0) {
01649   }
01650 }
01651 
01652 int public_mTRIm(size_t s) {
01653   int result;
01654   if (MALLOC_PREACTION != 0) {
01655     return 0;
01656   }
01657   result = mTRIm(s);
01658   if (MALLOC_POSTACTION != 0) {
01659   }
01660   return result;
01661 }
01662 
01663 size_t public_mUSABLe(Void_t* m) {
01664   size_t result;
01665   if (MALLOC_PREACTION != 0) {
01666     return 0;
01667   }
01668   result = mUSABLe(m);
01669   if (MALLOC_POSTACTION != 0) {
01670   }
01671   return result;
01672 }
01673 
01674 void public_mSTATs() {
01675   if (MALLOC_PREACTION != 0) {
01676     return;
01677   }
01678   mSTATs();
01679   if (MALLOC_POSTACTION != 0) {
01680   }
01681 }
01682 
01683 struct mallinfo public_mALLINFo() {
01684   struct mallinfo m;
01685   if (MALLOC_PREACTION != 0) {
01686     struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
01687     return nm;
01688   }
01689   m = mALLINFo();
01690   if (MALLOC_POSTACTION != 0) {
01691   }
01692   return m;
01693 }
01694 
01695 int public_mALLOPt(int p, int v) {
01696   int result;
01697   if (MALLOC_PREACTION != 0) {
01698     return 0;
01699   }
01700   result = mALLOPt(p, v);
01701   if (MALLOC_POSTACTION != 0) {
01702   }
01703   return result;
01704 }
01705 
01706 #endif
01707 
01708 
01709 
01710 /* ------------- Optional versions of memcopy ---------------- */
01711 
01712 
01713 #if USE_MEMCPY
01714 
01715 /* 
01716   Note: memcpy is ONLY invoked with non-overlapping regions,
01717   so the (usually slower) memmove is not needed.
01718 */
01719 
01720 #define MALLOC_COPY(dest, src, nbytes)  memcpy(dest, src, nbytes)
01721 #define MALLOC_ZERO(dest, nbytes)       memset(dest, 0,   nbytes)
01722 
01723 #else /* !USE_MEMCPY */
01724 
01725 /* Use Duff's device for good zeroing/copying performance. */
01726 
01727 #define MALLOC_ZERO(charp, nbytes)                                            \
01728 do {                                                                          \
01729   INTERNAL_SIZE_T* mzp = (INTERNAL_SIZE_T*)(charp);                           \
01730   unsigned long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T);                     \
01731   long mcn;                                                                   \
01732   if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; }             \
01733   switch (mctmp) {                                                            \
01734     case 0: for(;;) { *mzp++ = 0;                                             \
01735     case 7:           *mzp++ = 0;                                             \
01736     case 6:           *mzp++ = 0;                                             \
01737     case 5:           *mzp++ = 0;                                             \
01738     case 4:           *mzp++ = 0;                                             \
01739     case 3:           *mzp++ = 0;                                             \
01740     case 2:           *mzp++ = 0;                                             \
01741     case 1:           *mzp++ = 0; if(mcn <= 0) break; mcn--; }                \
01742   }                                                                           \
01743 } while(0)
01744 
01745 #define MALLOC_COPY(dest,src,nbytes)                                           \
01746 do {                                                                          \
01747   INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) src;                            \
01748   INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) dest;                           \
01749   unsigned long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T);                     \
01750   long mcn;                                                                   \
01751   if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; }             \
01752   switch (mctmp) {                                                            \
01753     case 0: for(;;) { *mcdst++ = *mcsrc++;                                    \
01754     case 7:           *mcdst++ = *mcsrc++;                                    \
01755     case 6:           *mcdst++ = *mcsrc++;                                    \
01756     case 5:           *mcdst++ = *mcsrc++;                                    \
01757     case 4:           *mcdst++ = *mcsrc++;                                    \
01758     case 3:           *mcdst++ = *mcsrc++;                                    \
01759     case 2:           *mcdst++ = *mcsrc++;                                    \
01760     case 1:           *mcdst++ = *mcsrc++; if(mcn <= 0) break; mcn--; }       \
01761   }                                                                           \
01762 } while(0)
01763 
01764 #endif
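
/*
  Usage note (illustrative): both macros operate only on word-aligned
  chunk payloads.  For example, realloc-style code copies an old
  payload into a newly allocated chunk with something like

      MALLOC_COPY(chunk2mem(newp), chunk2mem(oldp), oldsize - SIZE_SZ);

  Copies are always between distinct chunks, which is why the
  non-overlapping guarantee above holds and memmove is never needed.
*/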
01765 
01766 /* ------------------ MMAP support ------------------  */
01767 
01768 
01769 #if HAVE_MMAP
01770 
01771 #include <fcntl.h>
01772 #ifndef LACKS_SYS_MMAN_H
01773 #include <sys/mman.h>
01774 #endif
01775 
01776 #if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
01777 #define MAP_ANONYMOUS MAP_ANON
01778 #endif
01779 
01780 /* 
01781    Nearly all versions of mmap support MAP_ANONYMOUS, 
01782    so the following is unlikely to be needed, but is
01783    supplied just in case.
01784 */
01785 
01786 #ifndef MAP_ANONYMOUS
01787 
01788 static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
01789 
01790 #define MMAP(addr, size, prot, flags) ((dev_zero_fd < 0) ? \
01791  (dev_zero_fd = open("/dev/zero", O_RDWR), \
01792   mmap((addr), (size), (prot), (flags), dev_zero_fd, 0)) : \
01793    mmap((addr), (size), (prot), (flags), dev_zero_fd, 0))
01794 
01795 #else
01796 
01797 #define MMAP(addr, size, prot, flags) \
01798  (mmap((addr), (size), (prot), (flags)|MAP_ANONYMOUS, -1, 0))
01799 
01800 #endif
01801 
01802 
01803 #endif /* HAVE_MMAP */
01804 
01805 
01806 /*
01807   -----------------------  Chunk representations -----------------------
01808 */
01809 
01810 
01811 /*
01812   This struct declaration is misleading (but accurate and necessary).
01813   It declares a "view" into memory allowing access to necessary
01814   fields at known offsets from a given base. See explanation below.
01815 */
01816 
01817 struct malloc_chunk {
01818 
01819   INTERNAL_SIZE_T      prev_size;  /* Size of previous chunk (if free).  */
01820   INTERNAL_SIZE_T      size;       /* Size in bytes, including overhead. */
01821 
01822   struct malloc_chunk* fd;         /* double links -- used only if free. */
01823   struct malloc_chunk* bk;
01824 };
01825 
01826 
01827 typedef struct malloc_chunk* mchunkptr;
01828 
01829 /*
01830    malloc_chunk details:
01831 
01832     (The following includes lightly edited explanations by Colin Plumb.)
01833 
01834     Chunks of memory are maintained using a `boundary tag' method as
01835     described in e.g., Knuth or Standish.  (See the paper by Paul
01836     Wilson ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a
01837     survey of such techniques.)  Sizes of free chunks are stored both
01838     in the front of each chunk and at the end.  This makes
01839     consolidating fragmented chunks into bigger chunks very fast.  The
01840     size fields also hold bits representing whether chunks are free or
01841     in use.
01842 
01843     An allocated chunk looks like this:
01844 
01845 
01846     chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01847             |             Size of previous chunk, if allocated            | |
01848             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01849             |             Size of chunk, in bytes                         |P|
01850       mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01851             |             User data starts here...                          .
01852             .                                                               .
01853             .             (malloc_usable_space() bytes)                     .
01854             .                                                               |
01855 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01856             |             Size of chunk                                     |
01857             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01858 
01859 
01860     Where "chunk" is the front of the chunk for the purpose of most of
01861     the malloc code, but "mem" is the pointer that is returned to the
01862     user.  "Nextchunk" is the beginning of the next contiguous chunk.
01863 
01864     Chunks always begin on even word boundaries, so the mem portion
01865     (which is returned to the user) is also on an even word boundary, and
01866     thus at least double-word aligned.
01867 
01868     Free chunks are stored in circular doubly-linked lists, and look like this:
01869 
01870     chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01871             |             Size of previous chunk                            |
01872             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01873     `head:' |             Size of chunk, in bytes                         |P|
01874       mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01875             |             Forward pointer to next chunk in list             |
01876             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01877             |             Back pointer to previous chunk in list            |
01878             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01879             |             Unused space (may be 0 bytes long)                .
01880             .                                                               .
01881             .                                                               |
01882 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01883     `foot:' |             Size of chunk, in bytes                           |
01884             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01885 
01886     The P (PREV_INUSE) bit, stored in the unused low-order bit of the
01887     chunk size (which is always a multiple of two words), is an in-use
01888     bit for the *previous* chunk.  If that bit is *clear*, then the
01889     word before the current chunk size contains the previous chunk
01890     size, and can be used to find the front of the previous chunk.
01891     The very first chunk allocated always has this bit set,
01892     preventing access to non-existent (or non-owned) memory. If
01893     prev_inuse is set for any given chunk, then you CANNOT determine
01894     the size of the previous chunk, and might even get a memory
01895     addressing fault when trying to do so.
01896 
01897     Note that the `foot' of the current chunk is actually represented
01898     as the prev_size of the NEXT chunk. This makes it easier to
01899     deal with alignments etc but can be very confusing when trying
01900     to extend or adapt this code.
01901 
01902     The two exceptions to all this are
01903 
01904      1. The special chunk `top' doesn't bother using the
01905         trailing size field since there is no next contiguous chunk
01906         that would have to index off it. After initialization, `top'
01907         is forced to always exist.  If it would become less than
01908         MINSIZE bytes long, it is replenished.
01909 
01910      2. Chunks allocated via mmap, which have the second-lowest-order
01911         bit (IS_MMAPPED) set in their size fields.  Because they are
01912         allocated one-by-one, each must contain its own trailing size field.
01913 
01914 */
01915 
01916 /*
01917   ---------- Size and alignment checks and conversions ----------
01918 */
01919 
01920 /* conversion from malloc headers to user pointers, and back */
01921 
01922 #define chunk2mem(p)   ((Void_t*)((char*)(p) + 2*SIZE_SZ))
01923 #define mem2chunk(mem) ((mchunkptr)((char*)(mem) - 2*SIZE_SZ))
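
/*
  Worked example (assuming 4-byte SIZE_SZ): chunk2mem(p) is p + 8,
  stepping over the prev_size and size fields, and mem2chunk inverts
  it, so for any chunk pointer p:

      assert(mem2chunk(chunk2mem(p)) == p);
*/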
01924 
01925 /* The smallest possible chunk */
01926 #define MIN_CHUNK_SIZE        (sizeof(struct malloc_chunk))
01927 
01928 /* The smallest size we can malloc is an aligned minimal chunk */
01929 
01930 #define MINSIZE  \
01931   (unsigned long)(((MIN_CHUNK_SIZE+MALLOC_ALIGN_MASK) & ~MALLOC_ALIGN_MASK))
01932 
01933 /* Check if m has acceptable alignment */
01934 
01935 #define aligned_OK(m)  (((unsigned long)((m)) & (MALLOC_ALIGN_MASK)) == 0)
01936 
01937 
01938 /* 
01939    Check if a request is so large that it would wrap around zero when
01940    padded and aligned. To simplify some other code, the bound is made
01941    low enough so that adding MINSIZE will also not wrap around zero.
01942 */
01943 
01944 #define REQUEST_OUT_OF_RANGE(req)                                 \
01945   ((unsigned long)(req) >=                                        \
01946    (unsigned long)(INTERNAL_SIZE_T)(-2 * MINSIZE))    
01947 
01948 /* pad request bytes into a usable size -- internal version */
01949 
01950 #define request2size(req)                                         \
01951   (((req) + SIZE_SZ + MALLOC_ALIGN_MASK < MINSIZE)  ?             \
01952    MINSIZE :                                                      \
01953    ((req) + SIZE_SZ + MALLOC_ALIGN_MASK) & ~MALLOC_ALIGN_MASK)
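
/*
  Worked example (assuming 4-byte SIZE_SZ and 8-byte alignment, so
  MINSIZE is 16): request2size(20) = (20 + 4 + 7) & ~7 = 24, i.e. the
  20-byte request plus one size word, rounded up to the alignment.
  A tiny request such as request2size(1) falls below MINSIZE and is
  bumped up to 16.
*/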
01954 
01955 /*  Same, except also perform argument check */
01956 
01957 #define checked_request2size(req, sz)                             \
01958   if (REQUEST_OUT_OF_RANGE(req)) {                                \
01959     MALLOC_FAILURE_ACTION;                                        \
01960     return 0;                                                     \
01961   }                                                               \
01962   (sz) = request2size(req);                                              
01963 
01964 /*
01965   --------------- Physical chunk operations ---------------
01966 */
01967 
01968 
01969 /* size field is or'ed with PREV_INUSE when previous adjacent chunk in use */
01970 #define PREV_INUSE 0x1
01971 
01972 /* extract inuse bit of previous chunk */
01973 #define prev_inuse(p)       ((p)->size & PREV_INUSE)
01974 
01975 
01976 /* size field is or'ed with IS_MMAPPED if the chunk was obtained with mmap() */
01977 #define IS_MMAPPED 0x2
01978 
01979 /* check for mmap()'ed chunk */
01980 #define chunk_is_mmapped(p) ((p)->size & IS_MMAPPED)
01981 
01982 /* 
01983   Bits to mask off when extracting size 
01984 
01985   Note: IS_MMAPPED is intentionally not masked off from size field in
01986   macros for which mmapped chunks should never be seen. This should
01987   cause helpful core dumps to occur if it is tried by accident by
01988   people extending or adapting this malloc.
01989 */
01990 #define SIZE_BITS (PREV_INUSE|IS_MMAPPED)
01991 
01992 /* Get size, ignoring use bits */
01993 #define chunksize(p)         ((p)->size & ~(SIZE_BITS))
01994 
01995 
01996 /* Ptr to next physical malloc_chunk. */
01997 #define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->size & ~PREV_INUSE) ))
01998 
01999 /* Ptr to previous physical malloc_chunk */
02000 #define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_size) ))
02001 
02002 /* Treat space at ptr + offset as a chunk */
02003 #define chunk_at_offset(p, s)  ((mchunkptr)(((char*)(p)) + (s)))
02004 
02005 /* extract p's inuse bit */
02006 #define inuse(p)\
02007 ((((mchunkptr)(((char*)(p))+((p)->size & ~PREV_INUSE)))->size) & PREV_INUSE)
02008 
02009 /* set/clear chunk as being inuse without otherwise disturbing */
02010 #define set_inuse(p)\
02011 ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size |= PREV_INUSE
02012 
02013 #define clear_inuse(p)\
02014 ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size &= ~(PREV_INUSE)
02015 
02016 
02017 /* check/set/clear inuse bits in known places */
02018 #define inuse_bit_at_offset(p, s)\
02019  (((mchunkptr)(((char*)(p)) + (s)))->size & PREV_INUSE)
02020 
02021 #define set_inuse_bit_at_offset(p, s)\
02022  (((mchunkptr)(((char*)(p)) + (s)))->size |= PREV_INUSE)
02023 
02024 #define clear_inuse_bit_at_offset(p, s)\
02025  (((mchunkptr)(((char*)(p)) + (s)))->size &= ~(PREV_INUSE))
02026 
02027 
02028 /* Set size at head, without disturbing its use bit */
02029 #define set_head_size(p, s)  ((p)->size = (((p)->size & PREV_INUSE) | (s)))
02030 
02031 /* Set size/use field */
02032 #define set_head(p, s)       ((p)->size = (s))
02033 
02034 /* Set size at footer (only when chunk is not in use) */
02035 #define set_foot(p, s)       (((mchunkptr)((char*)(p) + (s)))->prev_size = (s))
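
/*
  Sketch of how these macros combine (roughly the pattern used later
  in this file when a free chunk p of size 'size' is split to satisfy
  a request of size nb):

      remainder = chunk_at_offset(p, nb);
      set_head(p, nb | PREV_INUSE);
      set_head(remainder, (size - nb) | PREV_INUSE);
      set_foot(remainder, size - nb);

  The allocated front part keeps PREV_INUSE set; the remainder records
  its size both at its head and, via set_foot, in the prev_size field
  of the chunk that follows it.
*/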
02036 
02037 
02038 /*
02039   -------------------- Internal data structures --------------------
02040 
02041    All internal state is held in an instance of malloc_state defined
02042    below. There are no other static variables, except in two optional
02043    cases: 
02044    * If USE_MALLOC_LOCK is defined, the mALLOC_MUTEx declared above. 
02045    * If HAVE_MMAP is true, but mmap doesn't support
02046      MAP_ANONYMOUS, a dummy file descriptor for mmap.
02047 
02048    Beware of lots of tricks that minimize the total bookkeeping space
02049    requirements. The result is a little over 1K bytes (for 4-byte
02050    pointers and size_t).
02051 */
02052 
02053 /*
02054   Bins
02055 
02056     An array of bin headers for free chunks. Each bin is doubly
02057     linked.  The bins are approximately proportionally (log) spaced.
02058     There are a lot of these bins (128). This may look excessive, but
02059     works very well in practice.  Most bins hold sizes that are
02060     unusual as malloc request sizes, but are more usual for fragments
02061     and consolidated sets of chunks, which is what these bins hold, so
02062     they can be found quickly.  All procedures maintain the invariant
02063     that no consolidated chunk physically borders another one, so each
02064     chunk in a list is known to be preceded and followed by either
02065     inuse chunks or the ends of memory.
02066 
02067     Chunks in bins are kept in size order, with ties going to the
02068     approximately least recently used chunk. Ordering isn't needed
02069     for the small bins, which all contain the same-sized chunks, but
02070     facilitates best-fit allocation for larger chunks. These lists
02071     are just sequential. Keeping them in order almost never requires
02072     enough traversal to warrant using fancier ordered data
02073     structures.  
02074 
02075     Chunks of the same size are linked with the most
02076     recently freed at the front, and allocations are taken from the
02077     back.  This results in LRU (FIFO) allocation order, which tends
02078     to give each chunk an equal opportunity to be consolidated with
02079     adjacent freed chunks, resulting in larger free chunks and less
02080     fragmentation.
02081 
02082     To simplify use in double-linked lists, each bin header acts
02083     as a malloc_chunk. This avoids special-casing for headers.
02084     But to conserve space and improve locality, we allocate
02085     only the fd/bk pointers of bins, and then use repositioning tricks
02086     to treat these as the fields of a malloc_chunk*.  
02087 */
02088 
02089 typedef struct malloc_chunk* mbinptr;
02090 
02091 /* addressing -- note that bin_at(0) does not exist */
02092 #define bin_at(m, i) ((mbinptr)((char*)&((m)->bins[(i)<<1]) - (SIZE_SZ<<1)))
02093 
02094 /* analog of ++bin */
02095 #define next_bin(b)  ((mbinptr)((char*)(b) + (sizeof(mchunkptr)<<1)))
02096 
02097 /* Reminders about list directionality within bins */
02098 #define first(b)     ((b)->fd)
02099 #define last(b)      ((b)->bk)
02100 
02101 /* Take a chunk off a bin list */
02102 #define unlink(P, BK, FD) {                                            \
02103   FD = P->fd;                                                          \
02104   BK = P->bk;                                                          \
02105   FD->bk = BK;                                                         \
02106   BK->fd = FD;                                                         \
02107 }
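
/*
  The inverse operation (roughly the pattern used later in this file)
  puts a chunk P at the front of a bin B, making it the most recently
  freed entry:

      FWD = B->fd;
      P->bk = B;
      P->fd = FWD;
      FWD->bk = P;
      B->fd = P;

  Because bins are circular, the same code works whether the bin is
  empty (B->fd == B) or not.
*/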
02108 
02109 /*
02110   Indexing
02111 
02112     Bins for sizes < 512 bytes contain chunks of all the same size, spaced
02113     8 bytes apart. Larger bins are approximately logarithmically spaced:
02114 
02115     64 bins of size       8
02116     32 bins of size      64
02117     16 bins of size     512
02118      8 bins of size    4096
02119      4 bins of size   32768
02120      2 bins of size  262144
02121      1 bin  of size what's left
02122 
02123     There is actually a little bit of slop in the numbers in bin_index
02124     for the sake of speed. This makes no difference elsewhere.
02125 
02126     The bins top out around 1MB because we expect to service large
02127     requests via mmap.
02128 */
02129 
02130 #define NBINS             128
02131 #define NSMALLBINS         64
02132 #define SMALLBIN_WIDTH      8
02133 #define MIN_LARGE_SIZE    512
02134 
02135 #define in_smallbin_range(sz)  \
02136   ((unsigned long)(sz) < (unsigned long)MIN_LARGE_SIZE)
02137 
02138 #define smallbin_index(sz)     (((unsigned)(sz)) >> 3)
02139 
02140 #define largebin_index(sz)                                                   \
02141 (((((unsigned long)(sz)) >>  6) <= 32)?  56 + (((unsigned long)(sz)) >>  6): \
02142  ((((unsigned long)(sz)) >>  9) <= 20)?  91 + (((unsigned long)(sz)) >>  9): \
02143  ((((unsigned long)(sz)) >> 12) <= 10)? 110 + (((unsigned long)(sz)) >> 12): \
02144  ((((unsigned long)(sz)) >> 15) <=  4)? 119 + (((unsigned long)(sz)) >> 15): \
02145  ((((unsigned long)(sz)) >> 18) <=  2)? 124 + (((unsigned long)(sz)) >> 18): \
02146                                         126)
02147 
02148 #define bin_index(sz) \
02149  ((in_smallbin_range(sz)) ? smallbin_index(sz) : largebin_index(sz))
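
/*
  Worked examples: a 24-byte chunk is in smallbin range, so
  bin_index(24) = 24 >> 3 = 3, and bin 3 holds only 24-byte chunks.
  A 600-byte chunk is large: 600 >> 6 = 9 <= 32, so
  bin_index(600) = 56 + 9 = 65, the second of the 64-byte-wide bins.
*/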
02150 
02151 
02152 /*
02153   Unsorted chunks
02154 
02155     All remainders from chunk splits, as well as all returned chunks,
02156     are first placed in the "unsorted" bin. They are then placed
02157     in regular bins after malloc gives them ONE chance to be used before
02158     binning. So, basically, the unsorted_chunks list acts as a queue,
02159     with chunks being placed on it in free (and malloc_consolidate),
02160     and taken off (to be either used or placed in bins) in malloc.
02161 */
02162 
02163 /* The otherwise unindexable 1-bin is used to hold unsorted chunks. */
02164 #define unsorted_chunks(M)          (bin_at(M, 1))
02165 
02166 /*
02167   Top
02168 
02169     The top-most available chunk (i.e., the one bordering the end of
02170     available memory) is treated specially. It is never included in
02171     any bin, is used only if no other chunk is available, and is
02172     released back to the system if it is very large (see
02173     M_TRIM_THRESHOLD).  Because top initially
02174     points to its own bin with initial zero size, thus forcing
02175     extension on the first malloc request, we avoid having any special
02176     code in malloc to check whether it even exists yet. But we still
02177     need to do so when getting memory from system, so we make
02178     initial_top treat the bin as a legal but unusable chunk during the
02179     interval between initialization and the first call to
02180     sYSMALLOc. (This is somewhat delicate, since it relies on
02181     the 2 preceding words to be zero during this interval as well.)
02182 */
02183 
02184 /* Conveniently, the unsorted bin can be used as dummy top on first call */
02185 #define initial_top(M)              (unsorted_chunks(M))
02186 
02187 /*
02188   Binmap
02189 
02190     To help compensate for the large number of bins, a one-level index
02191     structure is used for bin-by-bin searching.  `binmap' is a
02192     bitvector recording whether bins are definitely empty so they can
02193     be skipped over during traversals.  The bits are NOT always
02194     cleared as soon as bins are empty, but instead only
02195     when they are noticed to be empty during traversal in malloc.
02196 */
02197 
02198 /* Conservatively use 32 bits per map word, even if on 64bit system */
02199 #define BINMAPSHIFT      5
02200 #define BITSPERMAP       (1U << BINMAPSHIFT)
02201 #define BINMAPSIZE       (NBINS / BITSPERMAP)
02202 
02203 #define idx2block(i)     ((i) >> BINMAPSHIFT)
02204 #define idx2bit(i)       ((1U << ((i) & ((1U << BINMAPSHIFT)-1))))
02205 
02206 #define mark_bin(m,i)    ((m)->binmap[idx2block(i)] |=  idx2bit(i))
02207 #define unmark_bin(m,i)  ((m)->binmap[idx2block(i)] &= ~(idx2bit(i)))
02208 #define get_binmap(m,i)  ((m)->binmap[idx2block(i)] &   idx2bit(i))
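
/*
  Worked example: bin 70 lives in map word idx2block(70) = 70 >> 5 = 2,
  and its bit is idx2bit(70) = 1 << (70 & 31) = 1 << 6.  mark_bin,
  unmark_bin and get_binmap simply OR, AND-NOT and AND that bit into
  binmap[2].
*/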
02209 
02210 /*
02211   Fastbins
02212 
02213     An array of lists holding recently freed small chunks.  Fastbins
02214     are not doubly linked.  It is faster to single-link them, and
02215     since chunks are never removed from the middles of these lists,
02216     double linking is not necessary. Also, unlike regular bins, they
02217     are not even processed in FIFO order (they use faster LIFO) since
02218     ordering doesn't much matter in the transient contexts in which
02219     fastbins are normally used.
02220 
02221     Chunks in fastbins keep their inuse bit set, so they cannot
02222     be consolidated with other free chunks. malloc_consolidate
02223     releases all chunks in fastbins and consolidates them with
02224     other free chunks. 
02225 */
02226 
02227 typedef struct malloc_chunk* mfastbinptr;
02228 
02229 /* offset 2 to use otherwise unindexable first 2 bins */
02230 #define fastbin_index(sz)        ((((unsigned int)(sz)) >> 3) - 2)
02231 
02232 /* The maximum fastbin request size we support */
02233 #define MAX_FAST_SIZE     80
02234 
02235 #define NFASTBINS  (fastbin_index(request2size(MAX_FAST_SIZE))+1)
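
/*
  Worked example (assuming 4-byte SIZE_SZ and 8-byte alignment): the
  smallest chunk, 16 bytes, maps to fastbin_index(16) = (16 >> 3) - 2
  = 0; a 24-byte chunk maps to index 1.  With MAX_FAST_SIZE of 80,
  request2size(80) is 88, fastbin_index(88) is 9, and so NFASTBINS
  is 10.
*/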
02236 
02237 /*
02238   FASTBIN_CONSOLIDATION_THRESHOLD is the size of a chunk in free()
02239   that triggers automatic consolidation of possibly-surrounding
02240   fastbin chunks. This is a heuristic, so the exact value should not
02241   matter too much. It is defined at half the default trim threshold as a
02242   compromise heuristic to only attempt consolidation if it is likely
02243   to lead to trimming. However, it is not dynamically tunable, since
02244   consolidation reduces fragmentation surrounding large chunks even 
02245   if trimming is not used.
02246 */
02247 
02248 #define FASTBIN_CONSOLIDATION_THRESHOLD  (65536UL)
02249 
02250 /*
02251   Since the lowest 2 bits in max_fast don't matter in size comparisons, 
02252   they are used as flags.
02253 */
02254 
02255 /*
02256   FASTCHUNKS_BIT held in max_fast indicates that there are probably
02257   some fastbin chunks. It is set true on entering a chunk into any
02258   fastbin, and cleared only in malloc_consolidate.
02259 
02260   The truth value is inverted so that have_fastchunks will be true
02261   upon startup (since statics are zero-filled), simplifying
02262   initialization checks.
02263 */
02264 
02265 #define FASTCHUNKS_BIT        (1U)
02266 
02267 #define have_fastchunks(M)     (((M)->max_fast &  FASTCHUNKS_BIT) == 0)
02268 #define clear_fastchunks(M)    ((M)->max_fast |=  FASTCHUNKS_BIT)
02269 #define set_fastchunks(M)      ((M)->max_fast &= ~FASTCHUNKS_BIT)
02270 
02271 /*
02272   NONCONTIGUOUS_BIT indicates that MORECORE does not return contiguous
02273   regions.  Otherwise, contiguity is exploited in merging together,
02274   when possible, results from consecutive MORECORE calls.
02275 
02276   The initial value comes from MORECORE_CONTIGUOUS, but is
02277   changed dynamically if mmap is ever used as an sbrk substitute.
02278 */
02279 
02280 #define NONCONTIGUOUS_BIT     (2U)
02281 
02282 #define contiguous(M)          (((M)->max_fast &  NONCONTIGUOUS_BIT) == 0)
02283 #define noncontiguous(M)       (((M)->max_fast &  NONCONTIGUOUS_BIT) != 0)
02284 #define set_noncontiguous(M)   ((M)->max_fast |=  NONCONTIGUOUS_BIT)
02285 #define set_contiguous(M)      ((M)->max_fast &= ~NONCONTIGUOUS_BIT)
02286 
02287 /* 
02288    Set value of max_fast. 
02289    Use impossibly small value if 0.
02290    Precondition: there are no existing fastbin chunks.
02291    Setting the value clears fastchunk bit but preserves noncontiguous bit.
02292 */
02293 
02294 #define set_max_fast(M, s) \
02295   (M)->max_fast = (((s) == 0)? SMALLBIN_WIDTH: request2size(s)) | \
02296   FASTCHUNKS_BIT | \
02297   ((M)->max_fast &  NONCONTIGUOUS_BIT)
02298 
02299 
02300 /*
02301    ----------- Internal state representation and initialization -----------
02302 */
02303 
02304 struct malloc_state {
02305 
02306   /* The maximum chunk size to be eligible for fastbin */
02307   INTERNAL_SIZE_T  max_fast;   /* low 2 bits used as flags */
02308 
02309   /* Fastbins */
02310   mfastbinptr      fastbins[NFASTBINS];
02311 
02312   /* Base of the topmost chunk -- not otherwise kept in a bin */
02313   mchunkptr        top;
02314 
02315   /* The remainder from the most recent split of a small request */
02316   mchunkptr        last_remainder;
02317 
02318   /* Normal bins packed as described above */
02319   mchunkptr        bins[NBINS * 2];
02320 
02321   /* Bitmap of bins */
02322   unsigned int     binmap[BINMAPSIZE];
02323 
02324   /* Tunable parameters */
02325   unsigned long    trim_threshold;
02326   INTERNAL_SIZE_T  top_pad;
02327   INTERNAL_SIZE_T  mmap_threshold;
02328 
02329   /* Memory map support */
02330   int              n_mmaps;
02331   int              n_mmaps_max;
02332   int              max_n_mmaps;
02333 
02334   /* Cache malloc_getpagesize */
02335   unsigned int     pagesize;    
02336 
02337   /* Statistics */
02338   INTERNAL_SIZE_T  mmapped_mem;
02339   INTERNAL_SIZE_T  sbrked_mem;
02340   INTERNAL_SIZE_T  max_sbrked_mem;
02341   INTERNAL_SIZE_T  max_mmapped_mem;
02342   INTERNAL_SIZE_T  max_total_mem;
02343 };
02344 
02345 typedef struct malloc_state *mstate;
02346 
02347 /* 
02348    There is exactly one instance of this struct in this malloc.
02349    If you are adapting this malloc in a way that does NOT use a static
02350    malloc_state, you MUST explicitly zero-fill it before using. This
02351    malloc relies on the property that malloc_state is initialized to
02352    all zeroes (as is true of C statics).
02353 */
02354 
02355 static struct malloc_state av_;  /* never directly referenced */
02356 
02357 /*
02358    All uses of av_ are via get_malloc_state().
02359    At most one "call" to get_malloc_state is made per invocation of
02360    the public versions of malloc and free, but other routines
02361   that in turn invoke malloc and/or free may call it more than once. 
02362    Also, it is called in check* routines if DEBUG is set.
02363 */
02364 
02365 #define get_malloc_state() (&(av_))
02366 
02367 /*
02368   Initialize a malloc_state struct.
02369 
02370   This is called only from within malloc_consolidate, which needs
02371   be called in the same contexts anyway.  It is never called directly
02372   outside of malloc_consolidate because some optimizing compilers try
02373   to inline it at all call points, which turns out not to be an
02374   optimization at all. (Inlining it in malloc_consolidate is fine though.)
02375 */
02376 
02377 #if __STD_C
02378 static void malloc_init_state(mstate av)
02379 #else
02380 static void malloc_init_state(av) mstate av;
02381 #endif
02382 {
02383   int     i;
02384   mbinptr bin;
02385   
02386   /* Establish circular links for normal bins */
02387   for (i = 1; i < NBINS; ++i) { 
02388     bin = bin_at(av,i);
02389     bin->fd = bin->bk = bin;
02390   }
02391 
02392   av->top_pad        = DEFAULT_TOP_PAD;
02393   av->n_mmaps_max    = DEFAULT_MMAP_MAX;
02394   av->mmap_threshold = DEFAULT_MMAP_THRESHOLD;
02395   av->trim_threshold = DEFAULT_TRIM_THRESHOLD;
02396 
02397 #if !MORECORE_CONTIGUOUS
02398   set_noncontiguous(av);
02399 #endif
02400 
02401   set_max_fast(av, DEFAULT_MXFAST);
02402 
02403   av->top            = initial_top(av);
02404   av->pagesize       = malloc_getpagesize;
02405 }
02406 
02407 /* 
02408    Other internal utilities operating on mstates
02409 */
02410 
02411 #if __STD_C
02412 static Void_t*  sYSMALLOc(INTERNAL_SIZE_T, mstate);
02413 static int      sYSTRIm(size_t, mstate);
02414 static void     malloc_consolidate(mstate);
02415 static Void_t** iALLOc(size_t, size_t*, int, Void_t**);
02416 #else
02417 static Void_t*  sYSMALLOc();
02418 static int      sYSTRIm();
02419 static void     malloc_consolidate();
02420 static Void_t** iALLOc();
02421 #endif
02422 
02423 /*
02424   Debugging support
02425 
02426   These routines make a number of assertions about the states
02427   of data structures that should be true at all times. If any
02428   are not true, it's very likely that a user program has somehow
02429   trashed memory. (It's also possible that there is a coding error
02430   in malloc, in which case please report it!)
02431 */
02432 
02433 #if ! DEBUG
02434 
02435 #define check_chunk(P)
02436 #define check_free_chunk(P)
02437 #define check_inuse_chunk(P)
02438 #define check_remalloced_chunk(P,N)
02439 #define check_malloced_chunk(P,N)
02440 #define check_malloc_state()
02441 
02442 #else
02443 #define check_chunk(P)              do_check_chunk(P)
02444 #define check_free_chunk(P)         do_check_free_chunk(P)
02445 #define check_inuse_chunk(P)        do_check_inuse_chunk(P)
02446 #define check_remalloced_chunk(P,N) do_check_remalloced_chunk(P,N)
02447 #define check_malloced_chunk(P,N)   do_check_malloced_chunk(P,N)
02448 #define check_malloc_state()        do_check_malloc_state()
02449 
02450 /*
02451   Properties of all chunks
02452 */
02453 
02454 #if __STD_C
02455 static void do_check_chunk(mchunkptr p)
02456 #else
02457 static void do_check_chunk(p) mchunkptr p;
02458 #endif
02459 {
02460   mstate av = get_malloc_state();
02461   unsigned long sz = chunksize(p);
02462   /* min and max possible addresses assuming contiguous allocation */
02463   char* max_address = (char*)(av->top) + chunksize(av->top);
02464   char* min_address = max_address - av->sbrked_mem;
02465 
02466   if (!chunk_is_mmapped(p)) {
02467     
02468     /* Has legal address ... */
02469     if (p != av->top) {
02470       if (contiguous(av)) {
02471         assert(((char*)p) >= min_address);
02472         assert(((char*)p + sz) <= ((char*)(av->top)));
02473       }
02474     }
02475     else {
02476       /* top size is always at least MINSIZE */
02477       assert((unsigned long)(sz) >= MINSIZE);
02478       /* top predecessor always marked inuse */
02479       assert(prev_inuse(p));
02480     }
02481       
02482   }
02483   else {
02484 #if HAVE_MMAP
02485     /* address is outside main heap  */
02486     if (contiguous(av) && av->top != initial_top(av)) {
02487       assert(((char*)p) < min_address || ((char*)p) > max_address);
02488     }
02489     /* chunk is page-aligned */
02490     assert(((p->prev_size + sz) & (av->pagesize-1)) == 0);
02491     /* mem is aligned */
02492     assert(aligned_OK(chunk2mem(p)));
02493 #else
02494     /* force an appropriate assert violation if debug set */
02495     assert(!chunk_is_mmapped(p));
02496 #endif
02497   }
02498 }
02499 
02500 /*
02501   Properties of free chunks
02502 */
02503 
02504 #if __STD_C
02505 static void do_check_free_chunk(mchunkptr p)
02506 #else
02507 static void do_check_free_chunk(p) mchunkptr p;
02508 #endif
02509 {
02510   mstate av = get_malloc_state();
02511 
02512   INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
02513   mchunkptr next = chunk_at_offset(p, sz);
02514 
02515   do_check_chunk(p);
02516 
02517   /* Chunk must claim to be free ... */
02518   assert(!inuse(p));
02519   assert (!chunk_is_mmapped(p));
02520 
02521   /* Unless a special marker, must have OK fields */
02522   if ((unsigned long)(sz) >= MINSIZE)
02523   {
02524     assert((sz & MALLOC_ALIGN_MASK) == 0);
02525     assert(aligned_OK(chunk2mem(p)));
02526     /* ... matching footer field */
02527     assert(next->prev_size == sz);
02528     /* ... and is fully consolidated */
02529     assert(prev_inuse(p));
02530     assert (next == av->top || inuse(next));
02531 
02532     /* ... and has minimally sane links */
02533     assert(p->fd->bk == p);
02534     assert(p->bk->fd == p);
02535   }
02536   else /* markers are always of size SIZE_SZ */
02537     assert(sz == SIZE_SZ);
02538 }
02539 
02540 /*
02541   Properties of inuse chunks
02542 */
02543 
02544 #if __STD_C
02545 static void do_check_inuse_chunk(mchunkptr p)
02546 #else
02547 static void do_check_inuse_chunk(p) mchunkptr p;
02548 #endif
02549 {
02550   mstate av = get_malloc_state();
02551   mchunkptr next;
02552   do_check_chunk(p);
02553 
02554   if (chunk_is_mmapped(p))
02555     return; /* mmapped chunks have no next/prev */
02556 
02557   /* Check whether it claims to be in use ... */
02558   assert(inuse(p));
02559 
02560   next = next_chunk(p);
02561 
02562   /* ... and is surrounded by OK chunks.
02563     Since more things can be checked with free chunks than inuse ones,
02564     if an inuse chunk borders them and debug is on, it's worth doing them.
02565   */
02566   if (!prev_inuse(p))  {
02567     /* Note that we cannot even look at prev unless it is not inuse */
02568     mchunkptr prv = prev_chunk(p);
02569     assert(next_chunk(prv) == p);
02570     do_check_free_chunk(prv);
02571   }
02572 
02573   if (next == av->top) {
02574     assert(prev_inuse(next));
02575     assert(chunksize(next) >= MINSIZE);
02576   }
02577   else if (!inuse(next))
02578     do_check_free_chunk(next);
02579 }
02580 
02581 /*
02582   Properties of chunks recycled from fastbins
02583 */
02584 
02585 #if __STD_C
02586 static void do_check_remalloced_chunk(mchunkptr p, INTERNAL_SIZE_T s)
02587 #else
02588 static void do_check_remalloced_chunk(p, s) mchunkptr p; INTERNAL_SIZE_T s;
02589 #endif
02590 {
02591   INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
02592 
02593   do_check_inuse_chunk(p);
02594 
02595   /* Legal size ... */
02596   assert((sz & MALLOC_ALIGN_MASK) == 0);
02597   assert((unsigned long)(sz) >= MINSIZE);
02598   /* ... and alignment */
02599   assert(aligned_OK(chunk2mem(p)));
02600   /* chunk is less than MINSIZE more than request */
02601   assert((long)(sz) - (long)(s) >= 0);
02602   assert((long)(sz) - (long)(s + MINSIZE) < 0);
02603 }
02604 
02605 /*
02606   Properties of nonrecycled chunks at the point they are malloced
02607 */
02608 
02609 #if __STD_C
02610 static void do_check_malloced_chunk(mchunkptr p, INTERNAL_SIZE_T s)
02611 #else
02612 static void do_check_malloced_chunk(p, s) mchunkptr p; INTERNAL_SIZE_T s;
02613 #endif
02614 {
02615   /* same as recycled case ... */
02616   do_check_remalloced_chunk(p, s);
02617 
02618   /*
02619     ... plus,  must obey implementation invariant that prev_inuse is
02620     always true of any allocated chunk; i.e., that each allocated
02621     chunk borders either a previously allocated and still in-use
02622     chunk, or the base of its memory arena. This is ensured
02623     by making all allocations from the `lowest' part of any found
02624     chunk.  This does not necessarily hold however for chunks
02625     recycled via fastbins.
02626   */
02627 
02628   assert(prev_inuse(p));
02629 }
02630 
02631 
02632 /*
02633   Properties of malloc_state.
02634 
02635   This may be useful for debugging malloc, as well as detecting user
02636   programmer errors that somehow write into malloc_state.
02637 
02638   If you are extending or experimenting with this malloc, you can
02639   probably figure out how to hack this routine to print out or
02640   display chunk addresses, sizes, bins, and other instrumentation.
02641 */
02642 
02643 static void do_check_malloc_state()
02644 {
02645   mstate av = get_malloc_state();
02646   int i;
02647   mchunkptr p;
02648   mchunkptr q;
02649   mbinptr b;
02650   unsigned int binbit;
02651   int empty;
02652   unsigned int idx;
02653   INTERNAL_SIZE_T size;
02654   unsigned long total = 0;
02655   int max_fast_bin;
02656 
02657   /* internal size_t must be no wider than pointer type */
02658   assert(sizeof(INTERNAL_SIZE_T) <= sizeof(char*));
02659 
02660   /* alignment is a power of 2 */
02661   assert((MALLOC_ALIGNMENT & (MALLOC_ALIGNMENT-1)) == 0);
02662 
02663   /* cannot run remaining checks until fully initialized */
02664   if (av->top == 0 || av->top == initial_top(av))
02665     return;
02666 
02667   /* pagesize is a power of 2 */
02668   assert((av->pagesize & (av->pagesize-1)) == 0);
02669 
02670   /* properties of fastbins */
02671 
02672   /* max_fast is in allowed range */
02673   assert((av->max_fast & ~1) <= request2size(MAX_FAST_SIZE));
02674 
02675   max_fast_bin = fastbin_index(av->max_fast);
02676 
02677   for (i = 0; i < NFASTBINS; ++i) {
02678     p = av->fastbins[i];
02679 
02680     /* all bins past max_fast are empty */
02681     if (i > max_fast_bin)
02682       assert(p == 0);
02683 
02684     while (p != 0) {
02685       /* each chunk claims to be inuse */
02686       do_check_inuse_chunk(p);
02687       total += chunksize(p);
02688       /* chunk belongs in this bin */
02689       assert(fastbin_index(chunksize(p)) == i);
02690       p = p->fd;
02691     }
02692   }
02693 
02694   if (total != 0)
02695     assert(have_fastchunks(av));
02696   else if (!have_fastchunks(av))
02697     assert(total == 0);
02698 
02699   /* check normal bins */
02700   for (i = 1; i < NBINS; ++i) {
02701     b = bin_at(av,i);
02702 
02703     /* binmap is accurate (except for bin 1 == unsorted_chunks) */
02704     if (i >= 2) {
02705       binbit = get_binmap(av,i);
02706       empty = last(b) == b;
02707       if (!binbit)
02708         assert(empty);
02709       else if (!empty)
02710         assert(binbit);
02711     }
02712 
02713     for (p = last(b); p != b; p = p->bk) {
02714       /* each chunk claims to be free */
02715       do_check_free_chunk(p);
02716       size = chunksize(p);
02717       total += size;
02718       if (i >= 2) {
02719         /* chunk belongs in bin */
02720         idx = bin_index(size);
02721         assert(idx == i);
02722         /* lists are sorted */
02723         assert(p->bk == b || 
02724                (unsigned long)chunksize(p->bk) >= (unsigned long)chunksize(p));
02725       }
02726       /* chunk is followed by a legal chain of inuse chunks */
02727       for (q = next_chunk(p);
02728            (q != av->top && inuse(q) && 
02729              (unsigned long)(chunksize(q)) >= MINSIZE);
02730            q = next_chunk(q))
02731         do_check_inuse_chunk(q);
02732     }
02733   }
02734 
02735   /* top chunk is OK */
02736   check_chunk(av->top);
02737 
02738   /* sanity checks for statistics */
02739 
02740   assert(total <= (unsigned long)(av->max_total_mem));
02741   assert(av->n_mmaps >= 0);
02742   assert(av->n_mmaps <= av->n_mmaps_max);
02743   assert(av->n_mmaps <= av->max_n_mmaps);
02744 
02745   assert((unsigned long)(av->sbrked_mem) <=
02746          (unsigned long)(av->max_sbrked_mem));
02747 
02748   assert((unsigned long)(av->mmapped_mem) <=
02749          (unsigned long)(av->max_mmapped_mem));
02750 
02751   assert((unsigned long)(av->max_total_mem) >=
02752          (unsigned long)(av->mmapped_mem) + (unsigned long)(av->sbrked_mem));
02753 }
02754 #endif
02755 
02756 
02757 /* ----------- Routines dealing with system allocation -------------- */
02758 
02759 /*
02760   sysmalloc handles malloc cases requiring more memory from the system.
02761   On entry, it is assumed that av->top does not have enough
02762   space to service request for nb bytes, thus requiring that av->top
02763   be extended or replaced.
02764 */
02765 
02766 #if __STD_C
02767 static Void_t* sYSMALLOc(INTERNAL_SIZE_T nb, mstate av)
02768 #else
02769 static Void_t* sYSMALLOc(nb, av) INTERNAL_SIZE_T nb; mstate av;
02770 #endif
02771 {
02772   mchunkptr       old_top;        /* incoming value of av->top */
02773   INTERNAL_SIZE_T old_size;       /* its size */
02774   char*           old_end;        /* its end address */
02775 
02776   long            size;           /* arg to first MORECORE or mmap call */
02777   char*           brk;            /* return value from MORECORE */
02778 
02779   long            correction;     /* arg to 2nd MORECORE call */
02780   char*           snd_brk;        /* 2nd return val */
02781 
02782   INTERNAL_SIZE_T front_misalign; /* unusable bytes at front of new space */
02783   INTERNAL_SIZE_T end_misalign;   /* partial page left at end of new space */
02784   char*           aligned_brk;    /* aligned offset into brk */
02785 
02786   mchunkptr       p;              /* the allocated/returned chunk */
02787   mchunkptr       remainder;      /* remainder from allocation */
02788   unsigned long   remainder_size; /* its size */
02789 
02790   unsigned long   sum;            /* for updating stats */
02791 
02792   size_t          pagemask  = av->pagesize - 1;
02793 
02794 
02795 #if HAVE_MMAP
02796 
02797   /*
02798     If have mmap, and the request size meets the mmap threshold, and
02799     the system supports mmap, and there are few enough currently
02800     allocated mmapped regions, try to directly map this request
02801     rather than expanding top.
02802   */
02803 
02804   if ((unsigned long)(nb) >= (unsigned long)(av->mmap_threshold) &&
02805       (av->n_mmaps < av->n_mmaps_max)) {
02806 
02807     char* mm;             /* return value from mmap call*/
02808 
02809     /*
02810       Round up size to nearest page.  For mmapped chunks, the overhead
02811       is one SIZE_SZ unit larger than for normal chunks, because there
02812       is no following chunk whose prev_size field could be used.
02813     */
02814     size = (nb + SIZE_SZ + MALLOC_ALIGN_MASK + pagemask) & ~pagemask;
02815 
02816     /* Don't try if size wraps around 0 */
02817     if ((unsigned long)(size) > (unsigned long)(nb)) {
02818 
02819 #if linux
02820       // Added by Emery Berger (www.cs.utexas.edu/users/emery)
02821       // Initially suggested by Jim Nance for Hoard (www.hoard.org).
02822       mm = (char*)(MMAP((void *) 0x10000000, size, PROT_READ|PROT_WRITE, MAP_PRIVATE));
02823 #else
02824       mm = (char*)(MMAP(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE));
02825 #endif
02826 
02827       if (mm != (char*)(MORECORE_FAILURE)) {
02828         
02829         /*
02830           The offset to the start of the mmapped region is stored
02831           in the prev_size field of the chunk. This allows us to adjust
02832           returned start address to meet alignment requirements here 
02833           and in memalign(), and still be able to compute proper
02834           address argument for later munmap in free() and realloc().
02835         */
02836         
02837         front_misalign = (INTERNAL_SIZE_T)chunk2mem(mm) & MALLOC_ALIGN_MASK;
02838         if (front_misalign > 0) {
02839           correction = MALLOC_ALIGNMENT - front_misalign;
02840           p = (mchunkptr)(mm + correction);
02841           p->prev_size = correction;
02842           set_head(p, (size - correction) |IS_MMAPPED);
02843         }
02844         else {
02845           p = (mchunkptr)mm;
02846           set_head(p, size|IS_MMAPPED);
02847         }
02848         
02849         /* update statistics */
02850         
02851         if (++av->n_mmaps > av->max_n_mmaps) 
02852           av->max_n_mmaps = av->n_mmaps;
02853         
02854         sum = av->mmapped_mem += size;
02855         if (sum > (unsigned long)(av->max_mmapped_mem)) 
02856           av->max_mmapped_mem = sum;
02857         sum += av->sbrked_mem;
02858         if (sum > (unsigned long)(av->max_total_mem)) 
02859           av->max_total_mem = sum;
02860 
02861         check_chunk(p);
02862         
02863         return chunk2mem(p);
02864       }
02865     }
02866   }
02867 #endif
02868 
02869   /* Record incoming configuration of top */
02870 
02871   old_top  = av->top;
02872   old_size = chunksize(old_top);
02873   old_end  = (char*)(chunk_at_offset(old_top, old_size));
02874 
02875   brk = snd_brk = (char*)(MORECORE_FAILURE); 
02876 
02877   /* 
02878      If not the first time through, we require old_size to be
02879      at least MINSIZE and to have prev_inuse set.
02880   */
02881 
02882   assert((old_top == initial_top(av) && old_size == 0) || 
02883          ((unsigned long) (old_size) >= MINSIZE &&
02884           prev_inuse(old_top)));
02885 
02886   /* Precondition: not enough current space to satisfy nb request */
02887   assert((unsigned long)(old_size) < (unsigned long)(nb + MINSIZE));
02888 
02889   /* Precondition: all fastbins are consolidated */
02890   assert(!have_fastchunks(av));
02891 
02892 
02893   /* Request enough space for nb + pad + overhead */
02894 
02895   size = nb + av->top_pad + MINSIZE;
02896 
02897   /*
02898     If contiguous, we can subtract out existing space that we hope to
02899     combine with new space. We add it back later only if
02900     we don't actually get contiguous space.
02901   */
02902 
02903   if (contiguous(av))
02904     size -= old_size;
02905 
02906   /*
02907     Round to a multiple of page size.
02908     If MORECORE is not contiguous, this ensures that we only call it
02909     with whole-page arguments.  And if MORECORE is contiguous and
02910     this is not first time through, this preserves page-alignment of
02911     previous calls. Otherwise, we correct to page-align below.
02912   */
02913 
02914   size = (size + pagemask) & ~pagemask;
02915 
02916   /*
02917     Don't try to call MORECORE if argument is so big as to appear
02918     negative. Note that since mmap takes size_t arg, it may succeed
02919     below even if we cannot call MORECORE.
02920   */
02921 
02922   if (size > 0) 
02923     brk = (char*)(MORECORE(size));
02924 
02925   /*
02926     If we have mmap, try using it as a backup when MORECORE fails or
02927     cannot be used. This is worth doing on systems that have "holes" in
02928     address space, so sbrk cannot extend to give contiguous space, but
02929     space is available elsewhere.  Note that we ignore mmap max count
02930     and threshold limits, since the space will not be used as a
02931     segregated mmap region.
02932   */
02933 
02934 #if HAVE_MMAP
02935   if (brk == (char*)(MORECORE_FAILURE)) {
02936 
02937     /* Cannot merge with old top, so add its size back in */
02938     if (contiguous(av))
02939       size = (size + old_size + pagemask) & ~pagemask;
02940 
02941     /* If we are relying on mmap as backup, then use larger units */
02942     if ((unsigned long)(size) < (unsigned long)(MMAP_AS_MORECORE_SIZE))
02943       size = MMAP_AS_MORECORE_SIZE;
02944 
02945     /* Don't try if size wraps around 0 */
02946     if ((unsigned long)(size) > (unsigned long)(nb)) {
02947 
02948 #if linux
02949       brk = (char*)(MMAP((void *) 0x10000000, size, PROT_READ|PROT_WRITE, MAP_PRIVATE));
02950 #else
02951       brk = (char*)(MMAP(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE));
02952 #endif
02953 
02954       if (brk != (char*)(MORECORE_FAILURE)) {
02955         
02956         /* We do not need, and cannot use, another sbrk call to find end */
02957         snd_brk = brk + size;
02958         
02959         /* 
02960            Record that we no longer have a contiguous sbrk region. 
02961            After the first time mmap is used as backup, we do not
02962            ever rely on contiguous space since this could incorrectly
02963            bridge regions.
02964         */
02965         set_noncontiguous(av);
02966       }
02967     }
02968   }
02969 #endif
02970 
02971   if (brk != (char*)(MORECORE_FAILURE)) {
02972     av->sbrked_mem += size;
02973 
02974     /*
02975       If MORECORE extends previous space, we can likewise extend top size.
02976     */
02977     
02978     if (brk == old_end && snd_brk == (char*)(MORECORE_FAILURE)) {
02979       set_head(old_top, (size + old_size) | PREV_INUSE);
02980     }
02981     
02982     /*
02983       Otherwise, make adjustments:
02984       
02985       * If the first time through or noncontiguous, we need to call sbrk
02986         just to find out where the end of memory lies.
02987 
02988       * We need to ensure that all returned chunks from malloc will meet
02989         MALLOC_ALIGNMENT
02990 
02991       * If there was an intervening foreign sbrk, we need to adjust sbrk
02992         request size to account for fact that we will not be able to
02993         combine new space with existing space in old_top.
02994 
02995       * Almost all systems internally allocate whole pages at a time, in
02996         which case we might as well use the whole last page of request.
02997         So we allocate enough more memory to hit a page boundary now,
02998         which in turn causes future contiguous calls to page-align.
02999     */
03000     
03001     else {
03002       front_misalign = 0;
03003       end_misalign = 0;
03004       correction = 0;
03005       aligned_brk = brk;
03006       
03007       /* handle contiguous cases */
03008       if (contiguous(av)) { 
03009         
03010         /* Guarantee alignment of first new chunk made from this space */
03011 
03012         front_misalign = (INTERNAL_SIZE_T)chunk2mem(brk) & MALLOC_ALIGN_MASK;
03013         if (front_misalign > 0) {
03014 
03015           /*
03016             Skip over some bytes to arrive at an aligned position.
03017             We don't need to specially mark these wasted front bytes.
03018             They will never be accessed anyway because
03019             prev_inuse of av->top (and any chunk created from its start)
03020             is always true after initialization.
03021           */
03022 
03023           correction = MALLOC_ALIGNMENT - front_misalign;
03024           aligned_brk += correction;
03025         }
03026         
03027         /*
03028           If this isn't adjacent to existing space, then we will not
03029           be able to merge with old_top space, so must add to 2nd request.
03030         */
03031         
03032         correction += old_size;
03033         
03034         /* Extend the end address to hit a page boundary */
03035         end_misalign = (INTERNAL_SIZE_T)(brk + size + correction);
03036         correction += ((end_misalign + pagemask) & ~pagemask) - end_misalign;
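        /*
          Illustrative example (hypothetical addresses): if
          brk + size + correction ends at 0x123456 and the page size is
          4096, the next page boundary is 0x124000, so 0xbaa further
          bytes are folded into correction.
        */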
03037         
03038         assert(correction >= 0);
03039         snd_brk = (char*)(MORECORE(correction));
03040         
03041         /*
03042           If we can't allocate the correction, try to at least find out the current
03043           brk.  It might be enough to proceed without failing.
03044           
03045           Note that if second sbrk did NOT fail, we assume that space
03046           is contiguous with first sbrk. This is a safe assumption unless
03047           program is multithreaded but doesn't use locks and a foreign sbrk
03048           occurred between our first and second calls.
03049         */
03050         
03051         if (snd_brk == (char*)(MORECORE_FAILURE)) {
03052           correction = 0;
03053           snd_brk = (char*)(MORECORE(0));
03054         }
03055       }
03056       
03057       /* handle non-contiguous cases */
03058       else { 
03059         /* MORECORE/mmap must correctly align */
03060         assert(((unsigned long)chunk2mem(brk) & MALLOC_ALIGN_MASK) == 0);
03061         
03062         /* Find out current end of memory */
03063         if (snd_brk == (char*)(MORECORE_FAILURE)) {
03064           snd_brk = (char*)(MORECORE(0));
03065         }
03066       }
03067       
03068       /* Adjust top based on results of second sbrk */
03069       if (snd_brk != (char*)(MORECORE_FAILURE)) {
03070         av->top = (mchunkptr)aligned_brk;
03071         set_head(av->top, (snd_brk - aligned_brk + correction) | PREV_INUSE);
03072         av->sbrked_mem += correction;
03073      
03074         /*
03075           If not the first time through, we either have a
03076           gap due to foreign sbrk or a non-contiguous region.  Insert a
03077           double fencepost at old_top to prevent consolidation with space
03078           we don't own. These fenceposts are artificial chunks that are
03079           marked as inuse and are in any case too small to use.  We need
03080           two to make sizes and alignments work out.
03081         */
03082    
03083         if (old_size != 0) {
03084           /* 
03085              Shrink old_top to insert fenceposts, keeping size a
03086              multiple of MALLOC_ALIGNMENT. We know there is at least
03087              enough space in old_top to do this.
03088           */
03089           old_size = (old_size - 3*SIZE_SZ) & ~MALLOC_ALIGN_MASK;
03090           set_head(old_top, old_size | PREV_INUSE);
03091           
03092           /*
03093             Note that the following assignments completely overwrite
03094             old_top when old_size was previously MINSIZE.  This is
03095             intentional. We need the fencepost, even if old_top otherwise gets
03096             lost.
03097           */
03098           chunk_at_offset(old_top, old_size          )->size =
03099             SIZE_SZ|PREV_INUSE;
03100 
03101           chunk_at_offset(old_top, old_size + SIZE_SZ)->size =
03102             SIZE_SZ|PREV_INUSE;
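          /*
            Resulting layout (sketch): old_top now ends with two
            SIZE_SZ-sized chunks marked inuse.  They are never handed to
            a caller; their only role is to keep later consolidation
            from walking past old_end into memory this arena does not
            own.
          */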
03103 
03104           /* If possible, release the rest. */
03105           if (old_size >= MINSIZE) {
03106             fREe(chunk2mem(old_top));
03107           }
03108 
03109         }
03110       }
03111     }
03112     
03113     /* Update statistics */
03114     sum = av->sbrked_mem;
03115     if (sum > (unsigned long)(av->max_sbrked_mem))
03116       av->max_sbrked_mem = sum;
03117     
03118     sum += av->mmapped_mem;
03119     if (sum > (unsigned long)(av->max_total_mem))
03120       av->max_total_mem = sum;
03121 
03122     check_malloc_state();
03123     
03124     /* finally, do the allocation */
03125     p = av->top;
03126     size = chunksize(p);
03127     
03128     /* check that one of the above allocation paths succeeded */
03129     if ((unsigned long)(size) >= (unsigned long)(nb + MINSIZE)) {
03130       remainder_size = size - nb;
03131       remainder = chunk_at_offset(p, nb);
03132       av->top = remainder;
03133       set_head(p, nb | PREV_INUSE);
03134       set_head(remainder, remainder_size | PREV_INUSE);
03135       check_malloced_chunk(p, nb);
03136       return chunk2mem(p);
03137     }
03138   }
03139 
03140   /* catch all failure paths */
03141   MALLOC_FAILURE_ACTION;
03142   return 0;
03143 }
03144 
03145 
03146 /*
03147   sYSTRIm is an inverse of sorts to sYSMALLOc.  It gives memory back
03148   to the system (via negative arguments to sbrk) if there is unused
03149   memory at the `high' end of the malloc pool. It is called
03150   automatically by free() when top space exceeds the trim
03151   threshold. It is also called by the public malloc_trim routine.  It
03152   returns 1 if it actually released any memory, else 0.
03153 */
03154 
03155 #if __STD_C
03156 static int sYSTRIm(size_t pad, mstate av)
03157 #else
03158 static int sYSTRIm(pad, av) size_t pad; mstate av;
03159 #endif
03160 {
03161   long  top_size;        /* Amount of top-most memory */
03162   long  extra;           /* Amount to release */
03163   long  released;        /* Amount actually released */
03164   char* current_brk;     /* address returned by pre-check sbrk call */
03165   char* new_brk;         /* address returned by post-check sbrk call */
03166   size_t pagesz;
03167 
03168   pagesz = av->pagesize;
03169   top_size = chunksize(av->top);
03170   
03171   /* Release in pagesize units, keeping at least one page */
03172   extra = ((top_size - pad - MINSIZE + (pagesz-1)) / pagesz - 1) * pagesz;
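  /*
    Worked example (illustrative figures): with pagesz == 4096, pad == 0,
    MINSIZE == 16 and top_size == 102400, extra becomes
    ((102400 - 0 - 16 + 4095) / 4096 - 1) * 4096 == 24 * 4096 == 98304,
    leaving one page in top.
  */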
03173   
03174   if (extra > 0) {
03175     
03176     /*
03177       Only proceed if end of memory is where we last set it.
03178       This avoids problems if there were foreign sbrk calls.
03179     */
03180     current_brk = (char*)(MORECORE(0));
03181     if (current_brk == (char*)(av->top) + top_size) {
03182       
03183       /*
03184         Attempt to release memory. We ignore MORECORE return value,
03185         and instead call again to find out where new end of memory is.
03186         This avoids problems if the first call releases less than we asked,
03187         or if failure somehow altered the brk value. (We could still
03188         encounter problems if it altered brk in some very bad way,
03189         but the only thing we can do is adjust anyway, which will cause
03190         some downstream failure.)
03191       */
03192       
03193       MORECORE(-extra);
03194       new_brk = (char*)(MORECORE(0));
03195       
03196       if (new_brk != (char*)MORECORE_FAILURE) {
03197         released = (long)(current_brk - new_brk);
03198         
03199         if (released != 0) {
03200           /* Success. Adjust top. */
03201           av->sbrked_mem -= released;
03202           set_head(av->top, (top_size - released) | PREV_INUSE);
03203           check_malloc_state();
03204           return 1;
03205         }
03206       }
03207     }
03208   }
03209   return 0;
03210 }
03211 
03212 /*
03213   ------------------------------ malloc ------------------------------
03214 */
03215 
03216 #if __STD_C
03217 Void_t* mALLOc(size_t bytes)
03218 #else
03219   Void_t* mALLOc(bytes) size_t bytes;
03220 #endif
03221 {
03222   mstate av = get_malloc_state();
03223 
03224   INTERNAL_SIZE_T nb;               /* normalized request size */
03225   unsigned int    idx;              /* associated bin index */
03226   mbinptr         bin;              /* associated bin */
03227   mfastbinptr*    fb;               /* associated fastbin */
03228 
03229   mchunkptr       victim;           /* inspected/selected chunk */
03230   INTERNAL_SIZE_T size;             /* its size */
03231   int             victim_index;     /* its bin index */
03232 
03233   mchunkptr       remainder;        /* remainder from a split */
03234   unsigned long   remainder_size;   /* its size */
03235 
03236   unsigned int    block;            /* bit map traverser */
03237   unsigned int    bit;              /* bit map traverser */
03238   unsigned int    map;              /* current word of binmap */
03239 
03240   mchunkptr       fwd;              /* misc temp for linking */
03241   mchunkptr       bck;              /* misc temp for linking */
03242 
03243   /*
03244     Convert request size to internal form by adding SIZE_SZ bytes
03245     overhead plus possibly more to obtain necessary alignment and/or
03246     to obtain a size of at least MINSIZE, the smallest allocatable
03247     size. Also, checked_request2size traps (returning 0) request sizes
03248     that are so large that they wrap around zero when padded and
03249     aligned.
03250   */
03251 
03252   checked_request2size(bytes, nb);
03253 
03254   /*
03255     If the size qualifies as a fastbin, first check corresponding bin.
03256     This code is safe to execute even if av is not yet initialized, so we
03257     can try it without checking, which saves some time on this fast path.
03258   */
03259 
03260   if ((unsigned long)(nb) <= (unsigned long)(av->max_fast)) { 
03261     fb = &(av->fastbins[(fastbin_index(nb))]);
03262     if ( (victim = *fb) != 0) {
03263       *fb = victim->fd;
03264       check_remalloced_chunk(victim, nb);
03265       return chunk2mem(victim);
03266     }
03267   }
03268 
03269   /*
03270     If a small request, check regular bin.  Since these "smallbins"
03271     hold one size each, no searching within bins is necessary.
03272     (For a large request, we need to wait until unsorted chunks are
03273     processed to find best fit. But for small ones, fits are exact
03274     anyway, so we can check now, which is faster.)
03275   */
03276 
03277   if (in_smallbin_range(nb)) {
03278     idx = smallbin_index(nb);
03279     bin = bin_at(av,idx);
03280 
03281     if ( (victim = last(bin)) != bin) {
03282       if (victim == 0) /* initialization check */
03283         malloc_consolidate(av);
03284       else {
03285         bck = victim->bk;
03286         set_inuse_bit_at_offset(victim, nb);
03287         bin->bk = bck;
03288         bck->fd = bin;
03289         
03290         check_malloced_chunk(victim, nb);
03291         return chunk2mem(victim);
03292       }
03293     }
03294   }
03295 
03296   /* 
03297      If this is a large request, consolidate fastbins before continuing.
03298      While it might look excessive to kill all fastbins before
03299      even seeing if there is space available, this avoids
03300      fragmentation problems normally associated with fastbins.
03301      Also, in practice, programs tend to have runs of either small or
03302      large requests, but less often mixtures, so consolidation is not 
03303      invoked all that often in most programs. And the programs in which
03304      it is called frequently would otherwise tend to fragment.
03305   */
03306 
03307   else {
03308     idx = largebin_index(nb);
03309     if (have_fastchunks(av)) 
03310       malloc_consolidate(av);
03311   }
03312 
03313   /*
03314     Process recently freed or remaindered chunks, taking one only if
03315     it is an exact fit, or, if this is a small request, the chunk is the
03316     remainder from the most recent non-exact fit.  Place other traversed chunks in
03317     bins.  Note that this step is the only place in any routine where
03318     chunks are placed in bins.
03319 
03320     The outer loop here is needed because we might not realize until
03321     near the end of malloc that we should have consolidated, so must
03322     do so and retry. This happens at most once, and only when we would
03323     otherwise need to expand memory to service a "small" request.
03324   */
03325     
03326   for(;;) {    
03327     
03328     while ( (victim = unsorted_chunks(av)->bk) != unsorted_chunks(av)) {
03329       bck = victim->bk;
03330       size = chunksize(victim);
03331 
03332       /* 
03333          If a small request, try to use last remainder if it is the
03334          only chunk in unsorted bin.  This helps promote locality for
03335          runs of consecutive small requests. This is the only
03336          exception to best-fit, and applies only when there is
03337          no exact fit for a small chunk.
03338       */
03339 
03340       if (in_smallbin_range(nb) && 
03341           bck == unsorted_chunks(av) &&
03342           victim == av->last_remainder &&
03343           (unsigned long)(size) > (unsigned long)(nb + MINSIZE)) {
03344 
03345         /* split and reattach remainder */
03346         remainder_size = size - nb;
03347         remainder = chunk_at_offset(victim, nb);
03348         unsorted_chunks(av)->bk = unsorted_chunks(av)->fd = remainder;
03349         av->last_remainder = remainder; 
03350         remainder->bk = remainder->fd = unsorted_chunks(av);
03351         
03352         set_head(victim, nb | PREV_INUSE);
03353         set_head(remainder, remainder_size | PREV_INUSE);
03354         set_foot(remainder, remainder_size);
03355         
03356         check_malloced_chunk(victim, nb);
03357         return chunk2mem(victim);
03358       }
03359 
03360       /* remove from unsorted list */
03361       unsorted_chunks(av)->bk = bck;
03362       bck->fd = unsorted_chunks(av);
03363       
03364       /* Take now instead of binning if exact fit */
03365       
03366       if (size == nb) {
03367         set_inuse_bit_at_offset(victim, size);
03368         check_malloced_chunk(victim, nb);
03369         return chunk2mem(victim);
03370       }
03371       
03372       /* place chunk in bin */
03373       
03374       if (in_smallbin_range(size)) {
03375         victim_index = smallbin_index(size);
03376         bck = bin_at(av, victim_index);
03377         fwd = bck->fd;
03378       }
03379       else {
03380         victim_index = largebin_index(size);
03381         bck = bin_at(av, victim_index);
03382         fwd = bck->fd;
03383 
03384         /* maintain large bins in sorted order */
03385         if (fwd != bck) {
03386           size |= PREV_INUSE; /* Or with inuse bit to speed comparisons */
03387           /* if smaller than smallest, bypass loop below */
03388           if ((unsigned long)(size) <= (unsigned long)(bck->bk->size)) {
03389             fwd = bck;
03390             bck = bck->bk;
03391           }
03392           else {
03393             while ((unsigned long)(size) < (unsigned long)(fwd->size)) 
03394               fwd = fwd->fd;
03395             bck = fwd->bk;
03396           }
03397         }
03398       }
03399       
03400       mark_bin(av, victim_index);
03401       victim->bk = bck;
03402       victim->fd = fwd;
03403       fwd->bk = victim;
03404       bck->fd = victim;
03405     }
03406    
03407     /*
03408       If a large request, scan through the chunks of current bin in
03409       sorted order to find smallest that fits.  This is the only step
03410       where an unbounded number of chunks might be scanned without doing
03411       anything useful with them. However the lists tend to be short.
03412     */
03413       
03414     if (!in_smallbin_range(nb)) {
03415       bin = bin_at(av, idx);
03416 
03417       /* skip scan if empty or largest chunk is too small */
03418       if ((victim = last(bin)) != bin &&
03419           (unsigned long)(first(bin)->size) >= (unsigned long)(nb)) {
03420 
03421         while (((unsigned long)(size = chunksize(victim)) < 
03422                 (unsigned long)(nb)))
03423           victim = victim->bk;
03424 
03425         remainder_size = size - nb;
03426         unlink(victim, bck, fwd);
03427         
03428         /* Exhaust */
03429         if (remainder_size < MINSIZE)  {
03430           set_inuse_bit_at_offset(victim, size);
03431           check_malloced_chunk(victim, nb);
03432           return chunk2mem(victim);
03433         }
03434         /* Split */
03435         else {
03436           remainder = chunk_at_offset(victim, nb);
03437           unsorted_chunks(av)->bk = unsorted_chunks(av)->fd = remainder;
03438           remainder->bk = remainder->fd = unsorted_chunks(av);
03439           set_head(victim, nb | PREV_INUSE);
03440           set_head(remainder, remainder_size | PREV_INUSE);
03441           set_foot(remainder, remainder_size);
03442           check_malloced_chunk(victim, nb);
03443           return chunk2mem(victim);
03444         } 
03445       }
03446     }    
03447 
03448     /*
03449       Search for a chunk by scanning bins, starting with next largest
03450       bin. This search is strictly by best-fit; i.e., the smallest
03451       (with ties going to approximately the least recently used) chunk
03452       that fits is selected.
03453       
03454       The bitmap lets us skip over empty bins without examining them.
03455       The particular case of skipping all bins during warm-up phases
03456       when no chunks have been returned yet is faster than it might look.
03457     */
03458     
03459     ++idx;
03460     bin = bin_at(av,idx);
03461     block = idx2block(idx);
03462     map = av->binmap[block];
03463     bit = idx2bit(idx);
03464     
03465     for (;;) {
03466 
03467       /* Skip rest of block if there are no more set bits in this block.  */
03468       if (bit > map || bit == 0) {
03469         do {
03470           if (++block >= BINMAPSIZE)  /* out of bins */
03471             goto use_top;
03472         } while ( (map = av->binmap[block]) == 0);
03473 
03474         bin = bin_at(av, (block << BINMAPSHIFT));
03475         bit = 1;
03476       }
03477       
03478       /* Advance to bin with set bit. There must be one. */
03479       while ((bit & map) == 0) {
03480         bin = next_bin(bin);
03481         bit <<= 1;
03482         assert(bit != 0);
03483       }
03484       
03485       /* Inspect the bin. It is likely to be non-empty */
03486       victim = last(bin);
03487       
03488       /*  If a false alarm (empty bin), clear the bit. */
03489       if (victim == bin) {
03490         av->binmap[block] = map &= ~bit; /* Write through */
03491         bin = next_bin(bin);
03492         bit <<= 1;
03493       }
03494       
03495       else {
03496         size = chunksize(victim);
03497 
03498         /*  We know the first chunk in this bin is big enough to use. */
03499         assert((unsigned long)(size) >= (unsigned long)(nb));
03500 
03501         remainder_size = size - nb;
03502         
03503         /* unlink */
03504         bck = victim->bk;
03505         bin->bk = bck;
03506         bck->fd = bin;
03507         
03508         /* Exhaust */
03509         if (remainder_size < MINSIZE) {
03510           set_inuse_bit_at_offset(victim, size);
03511           check_malloced_chunk(victim, nb);
03512           return chunk2mem(victim);
03513         }
03514         
03515         /* Split */
03516         else {
03517           remainder = chunk_at_offset(victim, nb);
03518           
03519           unsorted_chunks(av)->bk = unsorted_chunks(av)->fd = remainder;
03520           remainder->bk = remainder->fd = unsorted_chunks(av);
03521           /* advertise as last remainder */
03522           if (in_smallbin_range(nb)) 
03523             av->last_remainder = remainder; 
03524           
03525           set_head(victim, nb | PREV_INUSE);
03526           set_head(remainder, remainder_size | PREV_INUSE);
03527           set_foot(remainder, remainder_size);
03528           check_malloced_chunk(victim, nb);
03529           return chunk2mem(victim);
03530         }
03531       }
03532     }
03533 
03534   use_top:    
03535     /*
03536       If large enough, split off the chunk bordering the end of memory
03537       (held in av->top). Note that this is in accord with the best-fit
03538       search rule.  In effect, av->top is treated as larger (and thus
03539       less well fitting) than any other available chunk since it can
03540       be extended to be as large as necessary (up to system
03541       limitations).
03542 
03543       We require that av->top always exists (i.e., has size >=
03544       MINSIZE) after initialization, so if it would otherwise be
03545       exhausted by the current request, it is replenished. (The main
03546       reason for ensuring it exists is that we may need MINSIZE space
03547       to put in fenceposts in sysmalloc.)
03548     */
03549 
03550     victim = av->top;
03551     size = chunksize(victim);
03552    
03553     if ((unsigned long)(size) >= (unsigned long)(nb + MINSIZE)) {
03554       remainder_size = size - nb;
03555       remainder = chunk_at_offset(victim, nb);
03556       av->top = remainder;
03557       set_head(victim, nb | PREV_INUSE);
03558       set_head(remainder, remainder_size | PREV_INUSE);
03559 
03560       check_malloced_chunk(victim, nb);
03561       return chunk2mem(victim);
03562     }
03563 
03564     /*
03565       If there is space available in fastbins, consolidate and retry,
03566       to possibly avoid expanding memory. This can occur only if nb is
03567       in smallbin range so we didn't consolidate upon entry.
03568     */
03569 
03570     else if (have_fastchunks(av)) {
03571       assert(in_smallbin_range(nb));
03572       malloc_consolidate(av);
03573       idx = smallbin_index(nb); /* restore original bin index */
03574     }
03575 
03576     /* 
03577        Otherwise, relay to handle system-dependent cases 
03578     */
03579     else 
03580       return sYSMALLOc(nb, av);    
03581   }
03582 }
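
/*
  Caller-side sketch (illustrative only, not part of the allocator): a
  small block that is freed and then requested again at the same size is
  normally served straight from the corresponding fastbin, so b below
  usually equals a:

      void* a = mALLOc(32);
      fREe(a);
      void* b = mALLOc(32);
      fREe(b);
*/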
03583 
03584 /*
03585   ------------------------------ free ------------------------------
03586 */
03587 
03588 #if __STD_C
03589 void fREe(Void_t* mem)
03590 #else
03591 void fREe(mem) Void_t* mem;
03592 #endif
03593 {
03594   mstate av = get_malloc_state();
03595 
03596   mchunkptr       p;           /* chunk corresponding to mem */
03597   INTERNAL_SIZE_T size;        /* its size */
03598   mfastbinptr*    fb;          /* associated fastbin */
03599   mchunkptr       nextchunk;   /* next contiguous chunk */
03600   INTERNAL_SIZE_T nextsize;    /* its size */
03601   int             nextinuse;   /* true if nextchunk is used */
03602   INTERNAL_SIZE_T prevsize;    /* size of previous contiguous chunk */
03603   mchunkptr       bck;         /* misc temp for linking */
03604   mchunkptr       fwd;         /* misc temp for linking */
03605 
03606 
03607   /* free(0) has no effect */
03608   if (mem != 0) {
03609     p = mem2chunk(mem);
03610     size = chunksize(p);
03611 
03612     check_inuse_chunk(p);
03613 
03614     /*
03615       If eligible, place chunk on a fastbin so it can be found
03616       and used quickly in malloc.
03617     */
03618 
03619     if ((unsigned long)(size) <= (unsigned long)(av->max_fast)
03620 
03621 #if TRIM_FASTBINS
03622         /* 
03623            If TRIM_FASTBINS set, don't place chunks
03624            bordering top into fastbins
03625         */
03626         && (chunk_at_offset(p, size) != av->top)
03627 #endif
03628         ) {
03629 
03630       set_fastchunks(av);
03631       fb = &(av->fastbins[fastbin_index(size)]);
03632       p->fd = *fb;
03633       *fb = p;
03634     }
03635 
03636     /*
03637        Consolidate other non-mmapped chunks as they arrive.
03638     */
03639 
03640     else if (!chunk_is_mmapped(p)) {
03641       nextchunk = chunk_at_offset(p, size);
03642       nextsize = chunksize(nextchunk);
03643 
03644       /* consolidate backward */
03645       if (!prev_inuse(p)) {
03646         prevsize = p->prev_size;
03647         size += prevsize;
03648         p = chunk_at_offset(p, -((long) prevsize));
03649         unlink(p, bck, fwd);
03650       }
03651 
03652       if (nextchunk != av->top) {
03653         /* get and clear inuse bit */
03654         nextinuse = inuse_bit_at_offset(nextchunk, nextsize);
03655         set_head(nextchunk, nextsize);
03656 
03657         /* consolidate forward */
03658         if (!nextinuse) {
03659           unlink(nextchunk, bck, fwd);
03660           size += nextsize;
03661         }
03662 
03663         /*
03664           Place the chunk in unsorted chunk list. Chunks are
03665           not placed into regular bins until after they have
03666           been given one chance to be used in malloc.
03667         */
03668 
03669         bck = unsorted_chunks(av);
03670         fwd = bck->fd;
03671         p->bk = bck;
03672         p->fd = fwd;
03673         bck->fd = p;
03674         fwd->bk = p;
03675 
03676         set_head(p, size | PREV_INUSE);
03677         set_foot(p, size);
03678         
03679         check_free_chunk(p);
03680       }
03681 
03682       /*
03683          If the chunk borders the current high end of memory,
03684          consolidate into top
03685       */
03686 
03687       else {
03688         size += nextsize;
03689         set_head(p, size | PREV_INUSE);
03690         av->top = p;
03691         check_chunk(p);
03692       }
03693 
03694       /*
03695         If freeing a large space, consolidate possibly-surrounding
03696         chunks. Then, if the total unused topmost memory exceeds trim
03697         threshold, ask malloc_trim to reduce top.
03698 
03699         Unless max_fast is 0, we don't know if there are fastbins
03700         bordering top, so we cannot tell for sure whether threshold
03701         has been reached unless fastbins are consolidated.  But we
03702         don't want to consolidate on each free.  As a compromise,
03703         consolidation is performed if FASTBIN_CONSOLIDATION_THRESHOLD
03704         is reached.
03705       */
03706 
03707       if ((unsigned long)(size) >= FASTBIN_CONSOLIDATION_THRESHOLD) { 
03708         if (have_fastchunks(av)) 
03709           malloc_consolidate(av);
03710 
03711 #ifndef MORECORE_CANNOT_TRIM        
03712         if ((unsigned long)(chunksize(av->top)) >= 
03713             (unsigned long)(av->trim_threshold)) 
03714           sYSTRIm(av->top_pad, av);
03715 #endif
03716       }
03717 
03718     }
03719     /*
03720       If the chunk was allocated via mmap, release via munmap()
03721       Note that if HAVE_MMAP is false but chunk_is_mmapped is
03722       true, then the user must have overwritten memory. There's nothing
03723       we can do to catch this error unless DEBUG is set, in which case
03724       check_inuse_chunk (above) will have triggered an error.
03725     */
03726 
03727     else {
03728 #if HAVE_MMAP
03729       int ret;
03730       INTERNAL_SIZE_T offset = p->prev_size;
03731       av->n_mmaps--;
03732       av->mmapped_mem -= (size + offset);
03733       ret = munmap((char*)p - offset, size + offset);
03734       /* munmap returns non-zero on failure */
03735       assert(ret == 0);
03736 #endif
03737     }
03738   }
03739 }
03740 
03741 /*
03742   ------------------------- malloc_consolidate -------------------------
03743 
03744   malloc_consolidate is a specialized version of free() that tears
03745   down chunks held in fastbins.  Free itself cannot be used for this
03746   purpose since, among other things, it might place chunks back onto
03747   fastbins.  So, instead, we need to use a minor variant of the same
03748   code.
03749   
03750   Also, because this routine needs to be called the first time through
03751   malloc anyway, it turns out to be the perfect place to trigger
03752   initialization code.
03753 */
03754 
03755 #if __STD_C
03756 static void malloc_consolidate(mstate av)
03757 #else
03758 static void malloc_consolidate(av) mstate av;
03759 #endif
03760 {
03761   mfastbinptr*    fb;                 /* current fastbin being consolidated */
03762   mfastbinptr*    maxfb;              /* last fastbin (for loop control) */
03763   mchunkptr       p;                  /* current chunk being consolidated */
03764   mchunkptr       nextp;              /* next chunk to consolidate */
03765   mchunkptr       unsorted_bin;       /* bin header */
03766   mchunkptr       first_unsorted;     /* chunk to link to */
03767 
03768   /* These have same use as in free() */
03769   mchunkptr       nextchunk;
03770   INTERNAL_SIZE_T size;
03771   INTERNAL_SIZE_T nextsize;
03772   INTERNAL_SIZE_T prevsize;
03773   int             nextinuse;
03774   mchunkptr       bck;
03775   mchunkptr       fwd;
03776 
03777   /*
03778     If max_fast is 0, we know that av hasn't
03779     yet been initialized, in which case do so below
03780   */
03781 
03782   if (av->max_fast != 0) {
03783     clear_fastchunks(av);
03784 
03785     unsorted_bin = unsorted_chunks(av);
03786 
03787     /*
03788       Remove each chunk from fast bin and consolidate it, placing it
03789       then in unsorted bin. Among other reasons for doing this,
03790       placing in unsorted bin avoids needing to calculate actual bins
03791       until malloc is sure that chunks aren't immediately going to be
03792       reused anyway.
03793     */
03794     
03795     maxfb = &(av->fastbins[fastbin_index(av->max_fast)]);
03796     fb = &(av->fastbins[0]);
03797     do {
03798       if ( (p = *fb) != 0) {
03799         *fb = 0;
03800         
03801         do {
03802           check_inuse_chunk(p);
03803           nextp = p->fd;
03804           
03805           /* Slightly streamlined version of consolidation code in free() */
03806           size = p->size & ~PREV_INUSE;
03807           nextchunk = chunk_at_offset(p, size);
03808           nextsize = chunksize(nextchunk);
03809           
03810           if (!prev_inuse(p)) {
03811             prevsize = p->prev_size;
03812             size += prevsize;
03813             p = chunk_at_offset(p, -((long) prevsize));
03814             unlink(p, bck, fwd);
03815           }
03816           
03817           if (nextchunk != av->top) {
03818             nextinuse = inuse_bit_at_offset(nextchunk, nextsize);
03819             set_head(nextchunk, nextsize);
03820             
03821             if (!nextinuse) {
03822               size += nextsize;
03823               unlink(nextchunk, bck, fwd);
03824             }
03825             
03826             first_unsorted = unsorted_bin->fd;
03827             unsorted_bin->fd = p;
03828             first_unsorted->bk = p;
03829             
03830             set_head(p, size | PREV_INUSE);
03831             p->bk = unsorted_bin;
03832             p->fd = first_unsorted;
03833             set_foot(p, size);
03834           }
03835           
03836           else {
03837             size += nextsize;
03838             set_head(p, size | PREV_INUSE);
03839             av->top = p;
03840           }
03841           
03842         } while ( (p = nextp) != 0);
03843         
03844       }
03845     } while (fb++ != maxfb);
03846   }
03847   else {
03848     malloc_init_state(av);
03849     check_malloc_state();
03850   }
03851 }
03852 
03853 /*
03854   ------------------------------ realloc ------------------------------
03855 */
03856 
03857 
03858 #if __STD_C
03859 Void_t* rEALLOc(Void_t* oldmem, size_t bytes)
03860 #else
03861 Void_t* rEALLOc(oldmem, bytes) Void_t* oldmem; size_t bytes;
03862 #endif
03863 {
03864   mstate av = get_malloc_state();
03865 
03866   INTERNAL_SIZE_T  nb;              /* padded request size */
03867 
03868   mchunkptr        oldp;            /* chunk corresponding to oldmem */
03869   INTERNAL_SIZE_T  oldsize;         /* its size */
03870 
03871   mchunkptr        newp;            /* chunk to return */
03872   INTERNAL_SIZE_T  newsize;         /* its size */
03873   Void_t*          newmem;          /* corresponding user mem */
03874 
03875   mchunkptr        next;            /* next contiguous chunk after oldp */
03876 
03877   mchunkptr        remainder;       /* extra space at end of newp */
03878   unsigned long    remainder_size;  /* its size */
03879 
03880   mchunkptr        bck;             /* misc temp for linking */
03881   mchunkptr        fwd;             /* misc temp for linking */
03882 
03883   unsigned long    copysize;        /* bytes to copy */
03884   unsigned int     ncopies;         /* INTERNAL_SIZE_T words to copy */
03885   INTERNAL_SIZE_T* s;               /* copy source */ 
03886   INTERNAL_SIZE_T* d;               /* copy destination */
03887 
03888 
03889 #ifdef REALLOC_ZERO_BYTES_FREES
03890   if (bytes == 0) {
03891     fREe(oldmem);
03892     return 0;
03893   }
03894 #endif
03895 
03896   /* realloc of null is supposed to be same as malloc */
03897   if (oldmem == 0) return mALLOc(bytes);
03898 
03899   checked_request2size(bytes, nb);
03900 
03901   oldp    = mem2chunk(oldmem);
03902   oldsize = chunksize(oldp);
03903 
03904   check_inuse_chunk(oldp);
03905 
03906   if (!chunk_is_mmapped(oldp)) {
03907 
03908     if ((unsigned long)(oldsize) >= (unsigned long)(nb)) {
03909       /* already big enough; split below */
03910       newp = oldp;
03911       newsize = oldsize;
03912     }
03913 
03914     else {
03915       next = chunk_at_offset(oldp, oldsize);
03916 
03917       /* Try to expand forward into top */
03918       if (next == av->top &&
03919           (unsigned long)(newsize = oldsize + chunksize(next)) >=
03920           (unsigned long)(nb + MINSIZE)) {
03921         set_head_size(oldp, nb);
03922         av->top = chunk_at_offset(oldp, nb);
03923         set_head(av->top, (newsize - nb) | PREV_INUSE);
03924         return chunk2mem(oldp);
03925       }
03926       
03927       /* Try to expand forward into next chunk;  split off remainder below */
03928       else if (next != av->top && 
03929                !inuse(next) &&
03930                (unsigned long)(newsize = oldsize + chunksize(next)) >=
03931                (unsigned long)(nb)) {
03932         newp = oldp;
03933         unlink(next, bck, fwd);
03934       }
03935 
03936       /* allocate, copy, free */
03937       else {
03938         newmem = mALLOc(nb - MALLOC_ALIGN_MASK);
03939         if (newmem == 0)
03940           return 0; /* propagate failure */
03941       
03942         newp = mem2chunk(newmem);
03943         newsize = chunksize(newp);
03944         
03945         /*
03946           Avoid copy if newp is next chunk after oldp.
03947         */
03948         if (newp == next) {
03949           newsize += oldsize;
03950           newp = oldp;
03951         }
03952         else {
03953           /*
03954             Unroll copy of <= 36 bytes (72 if 8-byte sizes)
03955             We know that contents have an odd number of
03956             INTERNAL_SIZE_T-sized words; minimally 3.
03957           */
03958           
03959           copysize = oldsize - SIZE_SZ;
03960           s = (INTERNAL_SIZE_T*)(oldmem);
03961           d = (INTERNAL_SIZE_T*)(newmem);
03962           ncopies = copysize / sizeof(INTERNAL_SIZE_T);
03963           assert(ncopies >= 3);
03964           
03965           if (ncopies > 9)
03966             MALLOC_COPY(d, s, copysize);
03967           
03968           else {
03969             *(d+0) = *(s+0);
03970             *(d+1) = *(s+1);
03971             *(d+2) = *(s+2);
03972             if (ncopies > 4) {
03973               *(d+3) = *(s+3);
03974               *(d+4) = *(s+4);
03975               if (ncopies > 6) {
03976                 *(d+5) = *(s+5);
03977                 *(d+6) = *(s+6);
03978                 if (ncopies > 8) {
03979                   *(d+7) = *(s+7);
03980                   *(d+8) = *(s+8);
03981                 }
03982               }
03983             }
03984           }
03985           
03986           fREe(oldmem);
03987           check_inuse_chunk(newp);
03988           return chunk2mem(newp);
03989         }
03990       }
03991     }
03992 
03993     /* If possible, free extra space in old or extended chunk */
03994 
03995     assert((unsigned long)(newsize) >= (unsigned long)(nb));
03996 
03997     remainder_size = newsize - nb;
03998 
03999     if (remainder_size < MINSIZE) { /* not enough extra to split off */
04000       set_head_size(newp, newsize);
04001       set_inuse_bit_at_offset(newp, newsize);
04002     }
04003     else { /* split remainder */
04004       remainder = chunk_at_offset(newp, nb);
04005       set_head_size(newp, nb);
04006       set_head(remainder, remainder_size | PREV_INUSE);
04007       /* Mark remainder as inuse so free() won't complain */
04008       set_inuse_bit_at_offset(remainder, remainder_size);
04009       fREe(chunk2mem(remainder)); 
04010     }
04011 
04012     check_inuse_chunk(newp);
04013     return chunk2mem(newp);
04014   }
04015 
04016   /*
04017     Handle mmap cases
04018   */
04019 
04020   else {
04021 #if HAVE_MMAP
04022 
04023 #if HAVE_MREMAP
04024     INTERNAL_SIZE_T offset = oldp->prev_size;
04025     size_t pagemask = av->pagesize - 1;
04026     char *cp;
04027     unsigned long sum;
04028     
04029     /* Note the extra SIZE_SZ overhead */
04030     newsize = (nb + offset + SIZE_SZ + pagemask) & ~pagemask;
04031 
04032     /* don't need to remap if still within same page */
04033     if (oldsize == newsize - offset) 
04034       return oldmem;
04035 
04036     cp = (char*)mremap((char*)oldp - offset, oldsize + offset, newsize, 1);
04037     
04038     if (cp != (char*)MORECORE_FAILURE) {
04039 
04040       newp = (mchunkptr)(cp + offset);
04041       set_head(newp, (newsize - offset)|IS_MMAPPED);
04042       
04043       assert(aligned_OK(chunk2mem(newp)));
04044       assert((newp->prev_size == offset));
04045       
04046       /* update statistics */
04047       sum = av->mmapped_mem += newsize - oldsize;
04048       if (sum > (unsigned long)(av->max_mmapped_mem)) 
04049         av->max_mmapped_mem = sum;
04050       sum += av->sbrked_mem;
04051       if (sum > (unsigned long)(av->max_total_mem)) 
04052         av->max_total_mem = sum;
04053       
04054       return chunk2mem(newp);
04055     }
04056 #endif
04057 
04058     /* Note the extra SIZE_SZ overhead. */
04059     if ((unsigned long)(oldsize) >= (unsigned long)(nb + SIZE_SZ)) 
04060       newmem = oldmem; /* do nothing */
04061     else {
04062       /* Must alloc, copy, free. */
04063       newmem = mALLOc(nb - MALLOC_ALIGN_MASK);
04064       if (newmem != 0) {
04065         MALLOC_COPY(newmem, oldmem, oldsize - 2*SIZE_SZ);
04066         fREe(oldmem);
04067       }
04068     }
04069     return newmem;
04070 
04071 #else 
04072     /* If !HAVE_MMAP, but chunk_is_mmapped, user must have overwritten mem */
04073     check_malloc_state();
04074     MALLOC_FAILURE_ACTION;
04075     return 0;
04076 #endif
04077   }
04078 }
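
/*
  Caller-side sketch (illustrative only): growing a buffer with rEALLOc.
  When the old chunk borders av->top or a free neighbor, the code above
  extends it in place; otherwise it falls back to allocate/copy/free, so
  the returned pointer may differ from the original.  On failure the old
  block is left intact:

      char* buf = (char*)mALLOc(100);
      char* big = (char*)rEALLOc(buf, 1000);
      if (big != 0)
        fREe(big);
      else
        fREe(buf);
*/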
04079 
04080 /*
04081   ------------------------------ memalign ------------------------------
04082 */
04083 
04084 #if __STD_C
04085 Void_t* mEMALIGn(size_t alignment, size_t bytes)
04086 #else
04087 Void_t* mEMALIGn(alignment, bytes) size_t alignment; size_t bytes;
04088 #endif
04089 {
04090   INTERNAL_SIZE_T nb;             /* padded  request size */
04091   char*           m;              /* memory returned by malloc call */
04092   mchunkptr       p;              /* corresponding chunk */
04093   char*           brk;            /* alignment point within p */
04094   mchunkptr       newp;           /* chunk to return */
04095   INTERNAL_SIZE_T newsize;        /* its size */
04096   INTERNAL_SIZE_T leadsize;       /* leading space before alignment point */
04097   mchunkptr       remainder;      /* spare room at end to split off */
04098   unsigned long   remainder_size; /* its size */
04099   INTERNAL_SIZE_T size;
04100 
04101   /* If we need less alignment than we give anyway, just relay to malloc */
04102 
04103   if (alignment <= MALLOC_ALIGNMENT) return mALLOc(bytes);
04104 
04105   /* Otherwise, ensure that it is at least a minimum chunk size */
04106 
04107   if (alignment <  MINSIZE) alignment = MINSIZE;
04108 
04109   /* Make sure alignment is power of 2 (in case MINSIZE is not).  */
04110   if ((alignment & (alignment - 1)) != 0) {
04111     size_t a = MALLOC_ALIGNMENT * 2;
04112     while ((unsigned long)a < (unsigned long)alignment) a <<= 1;
04113     alignment = a;
04114   }
04115 
04116   checked_request2size(bytes, nb);
04117 
04118   /*
04119     Strategy: find a spot within that chunk that meets the alignment
04120     request, and then possibly free the leading and trailing space.
04121   */
04122 
04123 
04124   /* Call malloc with worst case padding to hit alignment. */
04125 
04126   m  = (char*)(mALLOc(nb + alignment + MINSIZE));
04127 
04128   if (m == 0) return 0; /* propagate failure */
04129 
04130   p = mem2chunk(m);
04131 
04132   if ((((unsigned long)(m)) % alignment) != 0) { /* misaligned */
04133 
04134     /*
04135       Find an aligned spot inside chunk.  Since we need to give back
04136       leading space in a chunk of at least MINSIZE, if the first
04137       calculation places us at a spot with less than MINSIZE leader,
04138       we can move to the next aligned spot -- we've allocated enough
04139       total room so that this is always possible.
04140     */
04141 
04142     brk = (char*)mem2chunk(((unsigned long)(m + alignment - 1)) &
04143                            -((signed long) alignment));
04144     if ((unsigned long)(brk - (char*)(p)) < MINSIZE)
04145       brk += alignment;
04146 
04147     newp = (mchunkptr)brk;
04148     leadsize = brk - (char*)(p);
04149     newsize = chunksize(p) - leadsize;
04150 
04151     /* For mmapped chunks, just adjust offset */
04152     if (chunk_is_mmapped(p)) {
04153       newp->prev_size = p->prev_size + leadsize;
04154       set_head(newp, newsize|IS_MMAPPED);
04155       return chunk2mem(newp);
04156     }
04157 
04158     /* Otherwise, give back leader, use the rest */
04159     set_head(newp, newsize | PREV_INUSE);
04160     set_inuse_bit_at_offset(newp, newsize);
04161     set_head_size(p, leadsize);
04162     fREe(chunk2mem(p));
04163     p = newp;
04164 
04165     assert (newsize >= nb &&
04166             (((unsigned long)(chunk2mem(p))) % alignment) == 0);
04167   }
04168 
04169   /* Also give back spare room at the end */
04170   if (!chunk_is_mmapped(p)) {
04171     size = chunksize(p);
04172     if ((unsigned long)(size) > (unsigned long)(nb + MINSIZE)) {
04173       remainder_size = size - nb;
04174       remainder = chunk_at_offset(p, nb);
04175       set_head(remainder, remainder_size | PREV_INUSE);
04176       set_head_size(p, nb);
04177       fREe(chunk2mem(remainder));
04178     }
04179   }
04180 
04181   check_inuse_chunk(p);
04182   return chunk2mem(p);
04183 }
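
/*
  Caller-side sketch (illustrative only): requesting 64-byte alignment.
  The returned pointer is an ordinary chunk and is released with fREe:

      void* p = mEMALIGn(64, 1000);
      if (p != 0) {
        assert(((unsigned long)p % 64) == 0);
        fREe(p);
      }
*/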
04184 
04185 /*
04186   ------------------------------ calloc ------------------------------
04187 */
04188 
04189 #if __STD_C
04190 Void_t* cALLOc(size_t n_elements, size_t elem_size)
04191 #else
04192 Void_t* cALLOc(n_elements, elem_size) size_t n_elements; size_t elem_size;
04193 #endif
04194 {
04195   mchunkptr p;
04196   unsigned long clearsize;
04197   unsigned long nclears;
04198   INTERNAL_SIZE_T* d;
04199 
  Void_t* mem;

  /* Guard against overflow in n_elements * elem_size; an oversized
     request cannot be satisfied, so fail the same way mALLOc would. */
  if (elem_size != 0 && n_elements > ((size_t)-1) / elem_size) {
    MALLOC_FAILURE_ACTION;
    return 0;
  }

  mem = mALLOc(n_elements * elem_size);
04201 
04202   if (mem != 0) {
04203     p = mem2chunk(mem);
04204 
04205 #if MMAP_CLEARS
04206     if (!chunk_is_mmapped(p)) /* don't need to clear mmapped space */
04207 #endif
04208     {  
04209       /*
04210         Unroll clear of <= 36 bytes (72 if 8-byte sizes)
04211         We know that contents have an odd number of
04212         INTERNAL_SIZE_T-sized words; minimally 3.
04213       */
04214 
04215       d = (INTERNAL_SIZE_T*)mem;
04216       clearsize = chunksize(p) - SIZE_SZ;
04217       nclears = clearsize / sizeof(INTERNAL_SIZE_T);
04218       assert(nclears >= 3);
04219 
04220       if (nclears > 9)
04221         MALLOC_ZERO(d, clearsize);
04222 
04223       else {
04224         *(d+0) = 0;
04225         *(d+1) = 0;
04226         *(d+2) = 0;
04227         if (nclears > 4) {
04228           *(d+3) = 0;
04229           *(d+4) = 0;
04230           if (nclears > 6) {
04231             *(d+5) = 0;
04232             *(d+6) = 0;
04233             if (nclears > 8) {
04234               *(d+7) = 0;
04235               *(d+8) = 0;
04236             }
04237           }
04238         }
04239       }
04240     }
04241   }
04242   return mem;
04243 }
04244 
04245 /*
04246   ------------------------------ cfree ------------------------------
04247 */
04248 
04249 #if __STD_C
04250 void cFREe(Void_t *mem)
04251 #else
04252 void cFREe(mem) Void_t *mem;
04253 #endif
04254 {
04255   fREe(mem);
04256 }
04257 
04258 /*
04259   ------------------------- independent_calloc -------------------------
04260 */
04261 
04262 #if __STD_C
04263 Void_t** iCALLOc(size_t n_elements, size_t elem_size, Void_t* chunks[])
04264 #else
04265 Void_t** iCALLOc(n_elements, elem_size, chunks) size_t n_elements; size_t elem_size; Void_t* chunks[];
04266 #endif
04267 {
04268   size_t sz = elem_size; /* serves as 1-element array */
04269   /* opts arg of 3 means all elements are same size, and should be cleared */
04270   return iALLOc(n_elements, &sz, 3, chunks);
04271 }
04272 
04273 /*
04274   ------------------------- independent_comalloc -------------------------
04275 */
04276 
04277 #if __STD_C
04278 Void_t** iCOMALLOc(size_t n_elements, size_t sizes[], Void_t* chunks[])
04279 #else
04280 Void_t** iCOMALLOc(n_elements, sizes, chunks) size_t n_elements; size_t sizes[]; Void_t* chunks[];
04281 #endif
04282 {
04283   return iALLOc(n_elements, sizes, 0, chunks);
04284 }
04285 
04286 
04287 /*
04288   ------------------------------ ialloc ------------------------------
04289   ialloc provides common support for independent_X routines, handling all of
04290   the combinations that can result.
04291 
04292   The opts arg has:
04293     bit 0 set if all elements are same size (using sizes[0])
04294     bit 1 set if elements should be zeroed
04295 */
04296 
04297 
04298 #if __STD_C
04299 static Void_t** iALLOc(size_t n_elements, 
04300                        size_t* sizes,  
04301                        int opts,
04302                        Void_t* chunks[])
04303 #else
04304 static Void_t** iALLOc(n_elements, sizes, opts, chunks) size_t n_elements; size_t* sizes; int opts; Void_t* chunks[];
04305 #endif
04306 {
04307   mstate av = get_malloc_state();
04308   INTERNAL_SIZE_T element_size;   /* chunksize of each element, if all same */
04309   INTERNAL_SIZE_T contents_size;  /* total size of elements */
04310   INTERNAL_SIZE_T array_size;     /* request size of pointer array */
04311   Void_t*         mem;            /* malloced aggregate space */
04312   mchunkptr       p;              /* corresponding chunk */
04313   INTERNAL_SIZE_T remainder_size; /* remaining bytes while splitting */
04314   Void_t**        marray;         /* either "chunks" or malloced ptr array */
04315   mchunkptr       array_chunk;    /* chunk for malloced ptr array */
04316   int             mmx;            /* to disable mmap */
04317   INTERNAL_SIZE_T size;           
04318   size_t          i;
04319 
04320   /* Ensure initialization/consolidation */
04321   if (have_fastchunks(av)) malloc_consolidate(av);
04322 
04323   /* compute array length, if needed */
04324   if (chunks != 0) {
04325     if (n_elements == 0)
04326       return chunks; /* nothing to do */
04327     marray = chunks;
04328     array_size = 0;
04329   }
04330   else {
04331     /* if empty req, must still return chunk representing empty array */
04332     if (n_elements == 0) 
04333       return (Void_t**) mALLOc(0);
04334     marray = 0;
04335     array_size = request2size(n_elements * (sizeof(Void_t*)));
04336   }
04337 
04338   /* compute total element size */
04339   if (opts & 0x1) { /* all-same-size */
04340     element_size = request2size(*sizes);
04341     contents_size = n_elements * element_size;
04342   }
04343   else { /* add up all the sizes */
04344     element_size = 0;
04345     contents_size = 0;
04346     for (i = 0; i != n_elements; ++i) 
04347       contents_size += request2size(sizes[i]);     
04348   }
04349 
04350   /* subtract out alignment bytes from total to minimize overallocation */
04351   size = contents_size + array_size - MALLOC_ALIGN_MASK;
04352   
04353   /* 
04354      Allocate the aggregate chunk.
04355      But first disable mmap so malloc won't use it, since
04356      we would not be able to later free/realloc space internal
04357      to a segregated mmap region.
04358   */
04359   mmx = av->n_mmaps_max;   /* disable mmap */
04360   av->n_mmaps_max = 0;
04361   mem = mALLOc(size);
04362   av->n_mmaps_max = mmx;   /* reset mmap */
04363   if (mem == 0) 
04364     return 0;
04365 
04366   p = mem2chunk(mem);
04367   assert(!chunk_is_mmapped(p)); 
04368   remainder_size = chunksize(p);
04369 
04370   if (opts & 0x2) {       /* optionally clear the elements */
04371     MALLOC_ZERO(mem, remainder_size - SIZE_SZ - array_size);
04372   }
04373 
04374   /* If not provided, allocate the pointer array as final part of chunk */
04375   if (marray == 0) {
04376     array_chunk = chunk_at_offset(p, contents_size);
04377     marray = (Void_t**) (chunk2mem(array_chunk));
04378     set_head(array_chunk, (remainder_size - contents_size) | PREV_INUSE);
04379     remainder_size = contents_size;
04380   }
04381 
04382   /* split out elements */
04383   for (i = 0; ; ++i) {
04384     marray[i] = chunk2mem(p);
04385     if (i != n_elements-1) {
04386       if (element_size != 0) 
04387         size = element_size;
04388       else
04389         size = request2size(sizes[i]);          
04390       remainder_size -= size;
04391       set_head(p, size | PREV_INUSE);
04392       p = chunk_at_offset(p, size);
04393     }
04394     else { /* the final element absorbs any overallocation slop */
04395       set_head(p, remainder_size | PREV_INUSE);
04396       break;
04397     }
04398   }
04399 
04400 #if DEBUG
04401   if (marray != chunks) {
04402     /* final element must have exactly exhausted chunk */
04403     if (element_size != 0) 
04404       assert(remainder_size == element_size);
04405     else
04406       assert(remainder_size == request2size(sizes[i]));
04407     check_inuse_chunk(mem2chunk(marray));
04408   }
04409 
04410   for (i = 0; i != n_elements; ++i)
04411     check_inuse_chunk(mem2chunk(marray[i]));
04412 #endif
04413 
04414   return marray;
04415 }
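
/*
  Caller-side sketch (illustrative only): allocating a header and a
  same-lifetime element array in one shot via iCOMALLOc.  Each returned
  pointer may later be freed individually with fREe:

      struct Header { size_t n; };
      size_t  sizes[2];
      Void_t* chunks[2];
      sizes[0] = sizeof(struct Header);
      sizes[1] = 100 * sizeof(int);
      if (iCOMALLOc(2, sizes, chunks) != 0) {
        struct Header* h   = (struct Header*) chunks[0];
        int*           arr = (int*)           chunks[1];
        h->n   = 100;
        arr[0] = 0;
        fREe(arr);
        fREe(h);
      }
*/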
04416 
04417 
04418 /*
04419   ------------------------------ valloc ------------------------------
04420 */
04421 
04422 #if __STD_C
04423 Void_t* vALLOc(size_t bytes)
04424 #else
04425 Void_t* vALLOc(bytes) size_t bytes;
04426 #endif
04427 {
04428   /* Ensure initialization/consolidation */
04429   mstate av = get_malloc_state();
04430   if (have_fastchunks(av)) malloc_consolidate(av);
04431   return mEMALIGn(av->pagesize, bytes);
04432 }
04433 
04434 /*
04435   ------------------------------ pvalloc ------------------------------
04436 */
04437 
04438 
04439 #if __STD_C
04440 Void_t* pVALLOc(size_t bytes)
04441 #else
04442 Void_t* pVALLOc(bytes) size_t bytes;
04443 #endif
04444 {
04445   mstate av = get_malloc_state();
04446   size_t pagesz;
04447 
04448   /* Ensure initialization/consolidation */
04449   if (have_fastchunks(av)) malloc_consolidate(av);
04450   pagesz = av->pagesize;
04451   return mEMALIGn(pagesz, (bytes + pagesz - 1) & ~(pagesz - 1));
04452 }
04453    
04454 
04455 /*
04456   ------------------------------ malloc_trim ------------------------------
04457 */
04458 
04459 #if __STD_C
04460 int mTRIm(size_t pad)
04461 #else
04462 int mTRIm(pad) size_t pad;
04463 #endif
04464 {
04465   mstate av = get_malloc_state();
04466   /* Ensure initialization/consolidation */
04467   malloc_consolidate(av);
04468 
04469 #ifndef MORECORE_CANNOT_TRIM        
04470   return sYSTRIm(pad, av);
04471 #else
04472   return 0;
04473 #endif
04474 }
04475 
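/*
  Example (an illustrative sketch, assuming the default public name
  malloc_trim): after freeing a large amount of memory, ask the allocator to
  return unused space at the top of the heap to the system, keeping a 64K
  cushion for future requests.  The result is 1 if some memory was actually
  released, 0 otherwise (and always 0 when MORECORE_CANNOT_TRIM is defined).

      if (malloc_trim(64 * 1024))
        ;  // trailing heap space was given back via a negative MORECORE call
*/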
04476 
04477 /*
04478   ------------------------- malloc_usable_size -------------------------
04479 */
04480 
04481 #if __STD_C
04482 size_t mUSABLe(Void_t* mem)
04483 #else
04484 size_t mUSABLe(mem) Void_t* mem;
04485 #endif
04486 {
04487   mchunkptr p;
04488   if (mem != 0) {
04489     p = mem2chunk(mem);
04490     if (chunk_is_mmapped(p))
04491       return chunksize(p) - 2*SIZE_SZ;
04492     else if (inuse(p))
04493       return chunksize(p) - SIZE_SZ;
04494   }
04495   return 0;
04496 }
04497 
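/*
  Example (an illustrative sketch, assuming the default public names): the
  usable size of a chunk may exceed the number of bytes requested, because
  requests are padded and rounded up to chunk granularity.  The extra bytes
  may safely be used, but portable code should not rely on how many there are.

      char  *p = malloc(100);
      size_t n = p ? malloc_usable_size(p) : 0;   // n >= 100 on success
      free(p);
*/
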
04498 /*
04499   ------------------------------ mallinfo ------------------------------
04500 */
04501 
04502 struct mallinfo mALLINFo()
04503 {
04504   mstate av = get_malloc_state();
04505   struct mallinfo mi;
04506   int i;
04507   mbinptr b;
04508   mchunkptr p;
04509   INTERNAL_SIZE_T avail;
04510   INTERNAL_SIZE_T fastavail;
04511   int nblocks;
04512   int nfastblocks;
04513 
04514   /* Ensure initialization */
04515   if (av->top == 0)  malloc_consolidate(av);
04516 
04517   check_malloc_state();
04518 
04519   /* Account for top */
04520   avail = chunksize(av->top);
04521   nblocks = 1;  /* top always exists */
04522 
04523   /* traverse fastbins */
04524   nfastblocks = 0;
04525   fastavail = 0;
04526 
04527   for (i = 0; i < NFASTBINS; ++i) {
04528     for (p = av->fastbins[i]; p != 0; p = p->fd) {
04529       ++nfastblocks;
04530       fastavail += chunksize(p);
04531     }
04532   }
04533 
04534   avail += fastavail;
04535 
04536   /* traverse regular bins */
04537   for (i = 1; i < NBINS; ++i) {
04538     b = bin_at(av, i);
04539     for (p = last(b); p != b; p = p->bk) {
04540       ++nblocks;
04541       avail += chunksize(p);
04542     }
04543   }
04544 
04545   mi.smblks = nfastblocks;
04546   mi.ordblks = nblocks;
04547   mi.fordblks = avail;
04548   mi.uordblks = av->sbrked_mem - avail;
04549   mi.arena = av->sbrked_mem;
04550   mi.hblks = av->n_mmaps;
04551   mi.hblkhd = av->mmapped_mem;
04552   mi.fsmblks = fastavail;
04553   mi.keepcost = chunksize(av->top);
04554   mi.usmblks = av->max_total_mem;
04555   return mi;
04556 }
04557 
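/*
  Example (an illustrative sketch, assuming the default public name mallinfo
  and <stdio.h>): a snapshot of allocator state using the fields filled in
  above, printed with fprintf as mSTATs below does.

      void print_heap_stats(void)
      {
        struct mallinfo mi = mallinfo();
        fprintf(stderr, "sbrk'ed bytes      = %10lu\n", (unsigned long) mi.arena);
        fprintf(stderr, "in-use bytes       = %10lu\n", (unsigned long) mi.uordblks);
        fprintf(stderr, "free bytes in bins = %10lu\n", (unsigned long) mi.fordblks);
        fprintf(stderr, "fastbin free bytes = %10lu\n", (unsigned long) mi.fsmblks);
        fprintf(stderr, "mmapped bytes      = %10lu\n", (unsigned long) mi.hblkhd);
        fprintf(stderr, "trimmable top      = %10lu\n", (unsigned long) mi.keepcost);
      }
*/
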
04558 /*
04559   ------------------------------ malloc_stats ------------------------------
04560 */
04561 
04562 void mSTATs()
04563 {
04564   struct mallinfo mi = mALLINFo();
04565 
04566   /* EDB: Disable stats. (On WIN32 the printing below is compiled out.) */
04567 
04568 #ifndef WIN32
04569 
04570 #ifdef WIN32 /* never taken inside the #ifndef WIN32 guard above; kept from the original code */
04571   {
04572     unsigned long free, reserved, committed;
04573     vminfo (&free, &reserved, &committed);
04574     fprintf(stderr, "free bytes       = %10lu\n", 
04575             free);
04576     fprintf(stderr, "reserved bytes   = %10lu\n", 
04577             reserved);
04578     fprintf(stderr, "committed bytes  = %10lu\n", 
04579             committed);
04580   }
04581 #endif
04582 
04583 
04584   fprintf(stderr, "max system bytes = %10lu\n",
04585           (unsigned long)(mi.usmblks));
04586   fprintf(stderr, "system bytes     = %10lu\n",
04587           (unsigned long)(mi.arena + mi.hblkhd));
04588   fprintf(stderr, "in use bytes     = %10lu\n",
04589           (unsigned long)(mi.uordblks + mi.hblkhd));
04590 
04591 
04592 #ifdef WIN32 
04593   {
04594     unsigned long kernel, user;
04595     if (cpuinfo (TRUE, &kernel, &user)) {
04596       fprintf(stderr, "kernel ms        = %10lu\n", 
04597               kernel);
04598       fprintf(stderr, "user ms          = %10lu\n", 
04599               user);
04600     }
04601   }
04602 #endif
04603 #endif
04604 
04605 }
04606 
04607 
04608 /*
04609   ------------------------------ mallopt ------------------------------
04610 */
04611 
04612 #if __STD_C
04613 int mALLOPt(int param_number, int value)
04614 #else
04615 int mALLOPt(param_number, value) int param_number; int value;
04616 #endif
04617 {
04618   mstate av = get_malloc_state();
04619   /* Ensure initialization/consolidation */
04620   malloc_consolidate(av);
04621 
04622   switch(param_number) {
04623   case M_MXFAST:
04624     if (value >= 0 && value <= MAX_FAST_SIZE) {
04625       set_max_fast(av, value);
04626       return 1;
04627     }
04628     else
04629       return 0;
04630 
04631   case M_TRIM_THRESHOLD:
04632     av->trim_threshold = value;
04633     return 1;
04634 
04635   case M_TOP_PAD:
04636     av->top_pad = value;
04637     return 1;
04638 
04639   case M_MMAP_THRESHOLD:
04640     av->mmap_threshold = value;
04641     return 1;
04642 
04643   case M_MMAP_MAX:
04644 #if !HAVE_MMAP
04645     if (value != 0)
04646       return 0;
04647 #endif
04648     av->n_mmaps_max = value;
04649     return 1;
04650 
04651   default:
04652     return 0;
04653   }
04654 }
04655 
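/*
  Example (an illustrative sketch, assuming the default public name mallopt
  and the M_* parameter constants used in the switch above): tuning the
  allocator at startup.  Each call returns 1 if the parameter was accepted
  and 0 otherwise (e.g. an out-of-range M_MXFAST value, or a nonzero
  M_MMAP_MAX when mmap is unavailable).

      mallopt(M_MXFAST, 64);                  // cache chunks up to 64 bytes in fastbins
      mallopt(M_TRIM_THRESHOLD, 256 * 1024);  // trim top when >= 256K is free
      mallopt(M_TOP_PAD, 64 * 1024);          // extra slack on each sbrk request
      mallopt(M_MMAP_THRESHOLD, 512 * 1024);  // use mmap for requests >= 512K
      mallopt(M_MMAP_MAX, 1024);              // allow at most 1024 mmapped regions
*/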
04656 
04657 /* 
04658   -------------------- Alternative MORECORE functions --------------------
04659 */
04660 
04661 
04662 /*
04663   General Requirements for MORECORE.
04664 
04665   The MORECORE function must have the following properties:
04666 
04667   If MORECORE_CONTIGUOUS is false:
04668 
04669     * MORECORE must allocate in multiples of pagesize. It will
04670       only be called with arguments that are multiples of pagesize.
04671 
04672     * MORECORE(0) must return an address that is at least 
04673       MALLOC_ALIGNMENT aligned. (Page-aligning always suffices.)
04674 
04675   else (i.e. If MORECORE_CONTIGUOUS is true):
04676 
04677     * Consecutive calls to MORECORE with positive arguments
04678       return increasing addresses, indicating that space has been
04679       contiguously extended.
04680 
04681     * MORECORE need not allocate in multiples of pagesize.
04682       Calls to MORECORE need not have args of multiples of pagesize.
04683 
04684     * MORECORE need not page-align.
04685 
04686   In either case:
04687 
04688     * MORECORE may allocate more memory than requested. (Or even less,
04689       but this will generally result in a malloc failure.)
04690 
04691     * MORECORE must not allocate memory when given argument zero, but
04692       instead return one past the end address of memory from previous
04693       nonzero call. This malloc does NOT call MORECORE(0)
04694       until at least one call with positive arguments is made, so
04695       the initial value returned is not important.
04696 
04697     * Even though consecutive calls to MORECORE need not return contiguous
04698       addresses, it must be OK for malloc'ed chunks to span multiple
04699       regions in those cases where they do happen to be contiguous.
04700 
04701     * MORECORE need not handle negative arguments -- it may instead
04702       just return MORECORE_FAILURE when given negative arguments.
04703       Negative arguments are always multiples of pagesize. MORECORE
04704       must not misinterpret negative args as large positive unsigned
04705       args. You can prevent all such calls from occurring by defining
04706       MORECORE_CANNOT_TRIM.
04707 
04708   There is some variation across systems about the type of the
04709   argument to sbrk/MORECORE. If size_t is unsigned, then it cannot
04710   actually be size_t, because sbrk supports negative args, so it is
04711   normally the signed type of the same width as size_t (sometimes
04712   declared as "intptr_t", and sometimes "ptrdiff_t").  It doesn't much
04713   matter though. Internally, we use "long" as arguments, which should
04714   work across all reasonable possibilities.
04715 
04716   Additionally, if MORECORE ever returns failure for a positive
04717   request, and HAVE_MMAP is true, then mmap is used as a noncontiguous
04718   system allocator. This is a useful backup strategy for systems with
04719   holes in address spaces -- in this case sbrk cannot contiguously
04720   expand the heap, but mmap may be able to map noncontiguous space.
04721 
04722   If you'd like mmap to ALWAYS be used, you can define MORECORE to be
04723   a function that always returns MORECORE_FAILURE.
04724 
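  For instance, an illustrative sketch (following the same conventions as the
  osMoreCore example further below); with HAVE_MMAP nonzero, every request
  then falls through to the mmap-based system allocator:

      void *failMoreCore(int size)
      {
        return (void *) MORECORE_FAILURE;   // never supply sbrk-style memory
      }

      #define MORECORE failMoreCore
      #define MORECORE_CONTIGUOUS 0
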
04725   If you are using this malloc with something other than sbrk (or its
04726   emulation) to supply memory regions, you probably want to set
04727   MORECORE_CONTIGUOUS as false.  As an example, here is a custom
04728   allocator kindly contributed for pre-OSX macOS.  It uses virtually
04729   but not necessarily physically contiguous non-paged memory (locked
04730   in, present and won't get swapped out).  You can use it by
04731   uncommenting this section, adding some #includes, and setting up the
04732   appropriate defines above:
04733 
04734       #define MORECORE osMoreCore
04735       #define MORECORE_CONTIGUOUS 0
04736 
04737   There is also a shutdown routine that should somehow be called for
04738   cleanup upon program exit.
04739 
04740   #define MAX_POOL_ENTRIES 100
04741   #define MINIMUM_MORECORE_SIZE  (64 * 1024)
04742   static int next_os_pool;
04743   void *our_os_pools[MAX_POOL_ENTRIES];
04744 
04745   void *osMoreCore(int size)
04746   {
04747     void *ptr = 0;
04748     static void *sbrk_top = 0;
04749 
04750     if (size > 0)
04751     {
04752       if (size < MINIMUM_MORECORE_SIZE)
04753          size = MINIMUM_MORECORE_SIZE;
04754       if (CurrentExecutionLevel() == kTaskLevel)
04755          ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
04756       if (ptr == 0)
04757       {
04758         return (void *) MORECORE_FAILURE;
04759       }
04760       // save ptrs so they can be freed during cleanup
04761       our_os_pools[next_os_pool] = ptr;
04762       next_os_pool++;
04763       ptr = (void *) ((((unsigned long) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
04764       sbrk_top = (char *) ptr + size;
04765       return ptr;
04766     }
04767     else if (size < 0)
04768     {
04769       // we don't currently support shrink behavior
04770       return (void *) MORECORE_FAILURE;
04771     }
04772     else
04773     {
04774       return sbrk_top;
04775     }
04776   }
04777 
04778   // cleanup any allocated memory pools
04779   // called as last thing before shutting down driver
04780 
04781   void osCleanupMem(void)
04782   {
04783     void **ptr;
04784 
04785     for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
04786       if (*ptr)
04787       {
04788          PoolDeallocate(*ptr);
04789          *ptr = 0;
04790       }
04791   }
04792 
04793 */
04794 
04795 
04796 /* 
04797   -------------------------------------------------------------- 
04798 
04799   Emulation of sbrk for win32. 
04800   Donated by J. Walter <Walter@GeNeSys-e.de>.
04801   For additional information about this code, and malloc on Win32, see 
04802      http://www.genesys-e.de/jwalter/
04803 */
04804 
04805 
04806 #ifdef WIN32
04807 
04808 #ifdef _DEBUG
04809 /* #define TRACE */
04810 #endif
04811 
04812 /* Support for USE_MALLOC_LOCK */
04813 #ifdef USE_MALLOC_LOCK
04814 
04815 /* Wait for spin lock */
04816 static int slwait (int *sl) {
04817     while (InterlockedCompareExchange ((void **) sl, (void *) 1, (void *) 0) != 0) 
04818             Sleep (0);
04819     return 0;
04820 }
04821 
04822 /* Release spin lock */
04823 static int slrelease (int *sl) {
04824     InterlockedExchange (sl, 0);
04825     return 0;
04826 }
04827 
04828 #ifdef NEEDED
04829 /* Spin lock for emulation code */
04830 static int g_sl;
04831 #endif
04832 
04833 #endif /* USE_MALLOC_LOCK */
04834 
04835 /* getpagesize for windows */
04836 static long getpagesize (void) {
04837     static long g_pagesize = 0;
04838     if (! g_pagesize) {
04839         SYSTEM_INFO system_info;
04840         GetSystemInfo (&system_info);
04841         g_pagesize = system_info.dwPageSize;
04842     }
04843     return g_pagesize;
04844 }
04845 static long getregionsize (void) {
04846     static long g_regionsize = 0;
04847     if (! g_regionsize) {
04848         SYSTEM_INFO system_info;
04849         GetSystemInfo (&system_info);
04850         g_regionsize = system_info.dwAllocationGranularity;
04851     }
04852     return g_regionsize;
04853 }
04854 
04855 /* A region list entry */
04856 typedef struct _region_list_entry {
04857     void *top_allocated;
04858     void *top_committed;
04859     void *top_reserved;
04860     long reserve_size;
04861     struct _region_list_entry *previous;
04862 } region_list_entry;
04863 
04864 /* Allocate and link a region entry in the region list */
04865 static int region_list_append (region_list_entry **last, void *base_reserved, long reserve_size) {
04866     region_list_entry *next = HeapAlloc (GetProcessHeap (), 0, sizeof (region_list_entry));
04867     if (! next)
04868         return FALSE;
04869     next->top_allocated = (char *) base_reserved;
04870     next->top_committed = (char *) base_reserved;
04871     next->top_reserved = (char *) base_reserved + reserve_size;
04872     next->reserve_size = reserve_size;
04873     next->previous = *last;
04874     *last = next;
04875     return TRUE;
04876 }
04877 /* Free and unlink the last region entry from the region list */
04878 static int region_list_remove (region_list_entry **last) {
04879     region_list_entry *previous = (*last)->previous;
04880     if (! HeapFree (GetProcessHeap (), 0, *last))  /* second argument is dwFlags, not a size */
04881         return FALSE;
04882     *last = previous;
04883     return TRUE;
04884 }
04885 
04886 #define CEIL(size,to)   (((size)+(to)-1)&~((to)-1))
04887 #define FLOOR(size,to)  ((size)&~((to)-1))
04888 
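/* Both macros assume "to" is a power of two; for example, with 4K pages,
   CEIL(5000, 4096) == 8192 and FLOOR(5000, 4096) == 4096. */
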
04889 #define SBRK_SCALE  0
04890 /* #define SBRK_SCALE  1 */
04891 /* #define SBRK_SCALE  2 */
04892 /* #define SBRK_SCALE  4  */
04893 
04894 /* sbrk for windows */
04895 static void *sbrk (long size) {
04896     static long g_pagesize, g_my_pagesize;
04897     static long g_regionsize, g_my_regionsize;
04898     static region_list_entry *g_last;
04899     void *result = (void *) MORECORE_FAILURE;
04900 #ifdef TRACE
04901     printf ("sbrk %d\n", size);
04902 #endif
04903 #if defined (USE_MALLOC_LOCK) && defined (NEEDED)
04904     /* Wait for spin lock */
04905     slwait (&g_sl);
04906 #endif
04907     /* First time initialization */
04908     if (! g_pagesize) {
04909         g_pagesize = getpagesize ();
04910         g_my_pagesize = g_pagesize << SBRK_SCALE;
04911     }
04912     if (! g_regionsize) {
04913         g_regionsize = getregionsize ();
04914         g_my_regionsize = g_regionsize << SBRK_SCALE;
04915     }
04916     if (! g_last) {
04917         if (! region_list_append (&g_last, 0, 0)) {
04918            goto sbrk_exit;
04919                 }
04920     }
04921     /* Assert invariants */
04922     assert (g_last);
04923     assert ((char *) g_last->top_reserved - g_last->reserve_size <= (char *) g_last->top_allocated &&
04924             g_last->top_allocated <= g_last->top_committed);
04925     assert ((char *) g_last->top_reserved - g_last->reserve_size <= (char *) g_last->top_committed &&
04926             g_last->top_committed <= g_last->top_reserved &&
04927             (unsigned) g_last->top_committed % g_pagesize == 0);
04928     assert ((unsigned) g_last->top_reserved % g_regionsize == 0);
04929     assert ((unsigned) g_last->reserve_size % g_regionsize == 0);
04930     /* Allocation requested? */
04931     if (size >= 0) {
04932         /* Allocation size is the requested size */
04933         long allocate_size = size;
04934         /* Compute the size to commit */
04935         long to_commit = (char *) g_last->top_allocated + allocate_size - (char *) g_last->top_committed;
04936         /* Do we reach the commit limit? */
04937         if (to_commit > 0) {
04938             /* Round size to commit */
04939             long commit_size = CEIL (to_commit, g_my_pagesize);
04940             /* Compute the size to reserve */
04941             long to_reserve = (char *) g_last->top_committed + commit_size - (char *) g_last->top_reserved;
04942             /* Do we reach the reserve limit? */
04943             if (to_reserve > 0) {
04944                 /* Compute the remaining size to commit in the current region */
04945                 long remaining_commit_size = (char *) g_last->top_reserved - (char *) g_last->top_committed;
04946                 if (remaining_commit_size > 0) {
04947                     /* Assert preconditions */
04948                     assert ((unsigned) g_last->top_committed % g_pagesize == 0);
04949                     assert (0 < remaining_commit_size && remaining_commit_size % g_pagesize == 0); {
04950                         /* Commit this */
04951                         void *base_committed = VirtualAlloc (g_last->top_committed, remaining_commit_size,
04952                                                                                          MEM_COMMIT, PAGE_READWRITE);
04953                         /* Check returned pointer for consistency */
04954                         if (base_committed != g_last->top_committed) {
04955                             goto sbrk_exit;
04956                                                 }
04957                         /* Assert postconditions */
04958                         assert ((unsigned) base_committed % g_pagesize == 0);
04959 #ifdef TRACE
04960                         printf ("Commit %p %d\n", base_committed, remaining_commit_size);
04961 #endif
04962                         /* Adjust the region's commit top */
04963                         g_last->top_committed = (char *) base_committed + remaining_commit_size;
04964                     }
04965                 } {
04966                     /* Now we are going to search and reserve. */
04967                     int contiguous = -1;
04968                     int found = FALSE;
04969                     MEMORY_BASIC_INFORMATION memory_info;
04970                     void *base_reserved;
04971                     long reserve_size;
04972                     do {
04973                         /* Assume contiguous memory */
04974                         contiguous = TRUE;
04975                         /* Round size to reserve */
04976                         reserve_size = CEIL (to_reserve, g_my_regionsize);
04977                         /* Start with the current region's top */
04978                         memory_info.BaseAddress = g_last->top_reserved;
04979                         /* Assert preconditions */
04980                         assert ((unsigned) memory_info.BaseAddress % g_pagesize == 0);
04981                         assert (0 < reserve_size && reserve_size % g_regionsize == 0);
04982                         while (VirtualQuery (memory_info.BaseAddress, &memory_info, sizeof (memory_info))) {
04983                             /* Assert postconditions */
04984                             assert ((unsigned) memory_info.BaseAddress % g_pagesize == 0);
04985 #ifdef TRACE
04986                             printf ("Query %p %d %s\n", memory_info.BaseAddress, memory_info.RegionSize, 
04987                                     memory_info.State == MEM_FREE ? "FREE": 
04988                                     (memory_info.State == MEM_RESERVE ? "RESERVED":
04989                                      (memory_info.State == MEM_COMMIT ? "COMMITTED": "?")));
04990 #endif
04991                             /* Region is free, well aligned and big enough: we are done */
04992                             if (memory_info.State == MEM_FREE &&
04993                                 (unsigned) memory_info.BaseAddress % g_regionsize == 0 &&
04994                                 memory_info.RegionSize >= (unsigned) reserve_size) {
04995                                 found = TRUE;
04996                                 break;
04997                             }
04998                             /* From now on we can't get contiguous memory! */
04999                             contiguous = FALSE;
05000                             /* Recompute size to reserve */
05001                             reserve_size = CEIL (allocate_size, g_my_regionsize);
05002                             memory_info.BaseAddress = (char *) memory_info.BaseAddress + memory_info.RegionSize;
05003                             /* Assert preconditions */
05004                             assert ((unsigned) memory_info.BaseAddress % g_pagesize == 0);
05005                             assert (0 < reserve_size && reserve_size % g_regionsize == 0);
05006                         }
05007                         /* Search failed? */
05008                         if (! found) {
05009                             goto sbrk_exit;
05010                                                 }
05011                         /* Assert preconditions */
05012                         assert ((unsigned) memory_info.BaseAddress % g_regionsize == 0);
05013                         assert (0 < reserve_size && reserve_size % g_regionsize == 0);
05014                         /* Try to reserve this */
05015                         base_reserved = VirtualAlloc (memory_info.BaseAddress, reserve_size, 
05016                                                                           MEM_RESERVE, PAGE_NOACCESS);
05017                         if (! base_reserved) {
05018                             int rc = GetLastError ();
05019                             if (rc != ERROR_INVALID_ADDRESS) {
05020                                 goto sbrk_exit;
05021                                                         }
05022                         }
05023                         /* A null pointer signals (hopefully) a race condition with another thread. */
05024                         /* In this case, we try again. */
05025                     } while (! base_reserved);
05026                     /* Check returned pointer for consistency */
05027                     if (memory_info.BaseAddress && base_reserved != memory_info.BaseAddress) {
05028                         goto sbrk_exit;
05029                                         }
05030                     /* Assert postconditions */
05031                     assert ((unsigned) base_reserved % g_regionsize == 0);
05032 #ifdef TRACE
05033                     printf ("Reserve %p %d\n", base_reserved, reserve_size);
05034 #endif
05035                     /* Did we get contiguous memory? */
05036                     if (contiguous) {
05037                         long start_size = (char *) g_last->top_committed - (char *) g_last->top_allocated;
05038                         /* Adjust allocation size */
05039                         allocate_size -= start_size;
05040                         /* Adjust the region's allocation top */
05041                         g_last->top_allocated = g_last->top_committed;
05042                         /* Recompute the size to commit */
05043                         to_commit = (char *) g_last->top_allocated + allocate_size - (char *) g_last->top_committed;
05044                         /* Round size to commit */
05045                         commit_size = CEIL (to_commit, g_my_pagesize);
05046                     } 
05047                     /* Append the new region to the list */
05048                     if (! region_list_append (&g_last, base_reserved, reserve_size)) {
05049                         goto sbrk_exit;
05050                                         }
05051                     /* Didn't we get contiguous memory? */
05052                     if (! contiguous) {
05053                         /* Recompute the size to commit */
05054                         to_commit = (char *) g_last->top_allocated + allocate_size - (char *) g_last->top_committed;
05055                         /* Round size to commit */
05056                         commit_size = CEIL (to_commit, g_my_pagesize);
05057                     }
05058                 }
05059             } 
05060             /* Assert preconditions */
05061             assert ((unsigned) g_last->top_committed % g_pagesize == 0);
05062             assert (0 < commit_size && commit_size % g_pagesize == 0); {
05063                 /* Commit this */
05064                 void *base_committed = VirtualAlloc (g_last->top_committed, commit_size, 
05065                                                                              MEM_COMMIT, PAGE_READWRITE);
05066                 /* Check returned pointer for consistency */
05067                 if (base_committed != g_last->top_committed) {
05068                     goto sbrk_exit;
05069                                 }
05070                 /* Assert postconditions */
05071                 assert ((unsigned) base_committed % g_pagesize == 0);
05072 #ifdef TRACE
05073                 printf ("Commit %p %d\n", base_committed, commit_size);
05074 #endif
05075                 /* Adjust the region's commit top */
05076                 g_last->top_committed = (char *) base_committed + commit_size;
05077             }
05078         } 
05079         /* Adjust the region's allocation top */
05080         g_last->top_allocated = (char *) g_last->top_allocated + allocate_size;
05081         result = (char *) g_last->top_allocated - size;
05082     /* Deallocation requested? */
05083     } else if (size < 0) {
05084         long deallocate_size = - size;
05085         /* As long as we have a region to release */
05086         while ((char *) g_last->top_allocated - deallocate_size < (char *) g_last->top_reserved - g_last->reserve_size) {
05087             /* Get the size to release */
05088             long release_size = g_last->reserve_size;
05089             /* Get the base address */
05090             void *base_reserved = (char *) g_last->top_reserved - release_size;
05091             /* Assert preconditions */
05092             assert ((unsigned) base_reserved % g_regionsize == 0); 
05093             assert (0 < release_size && release_size % g_regionsize == 0); {
05094                 /* Release this */
05095                 int rc = VirtualFree (base_reserved, 0, 
05096                                       MEM_RELEASE);
05097                 /* Check returned code for consistency */
05098                 if (! rc) {
05099                     goto sbrk_exit;
05100                                 }
05101 #ifdef TRACE
05102                 printf ("Release %p %d\n", base_reserved, release_size);
05103 #endif
05104             }
05105             /* Adjust deallocation size */
05106             deallocate_size -= (char *) g_last->top_allocated - (char *) base_reserved;
05107             /* Remove the old region from the list */
05108             if (! region_list_remove (&g_last)) {
05109                 goto sbrk_exit;
05110                         }
05111         } {
05112             /* Compute the size to decommit */
05113             long to_decommit = (char *) g_last->top_committed - ((char *) g_last->top_allocated - deallocate_size);
05114             if (to_decommit >= g_my_pagesize) {
05115                 /* Compute the size to decommit */
05116                 long decommit_size = FLOOR (to_decommit, g_my_pagesize);
05117                 /*  Compute the base address */
05118                 void *base_committed = (char *) g_last->top_committed - decommit_size;
05119                 /* Assert preconditions */
05120                 assert ((unsigned) base_committed % g_pagesize == 0);
05121                 assert (0 < decommit_size && decommit_size % g_pagesize == 0); {
05122                     /* Decommit this */
05123                     int rc = VirtualFree ((char *) base_committed, decommit_size, 
05124                                           MEM_DECOMMIT);
05125                     /* Check returned code for consistency */
05126                     if (! rc) {
05127                         goto sbrk_exit;
05128                                         }
05129 #ifdef TRACE
05130                     printf ("Decommit %p %d\n", base_committed, decommit_size);
05131 #endif
05132                 }
05133                 /* Adjust deallocation size and the region's commit and allocation tops */
05134                 deallocate_size -= (char *) g_last->top_allocated - (char *) base_committed;
05135                 g_last->top_committed = base_committed;
05136                 g_last->top_allocated = base_committed;
05137             }
05138         }
05139         /* Adjust the region's allocation top */
05140         g_last->top_allocated = (char *) g_last->top_allocated - deallocate_size;
05141         /* Check for underflow */
05142         if ((char *) g_last->top_reserved - g_last->reserve_size > (char *) g_last->top_allocated ||
05143             g_last->top_allocated > g_last->top_committed) {
05144             /* Adjust the region's allocation top */
05145             g_last->top_allocated = (char *) g_last->top_reserved - g_last->reserve_size;
05146             goto sbrk_exit;
05147         }
05148         result = g_last->top_allocated;
05149     }
05150     /* Assert invariants */
05151     assert (g_last);
05152     assert ((char *) g_last->top_reserved - g_last->reserve_size <= (char *) g_last->top_allocated &&
05153             g_last->top_allocated <= g_last->top_committed);
05154     assert ((char *) g_last->top_reserved - g_last->reserve_size <= (char *) g_last->top_committed &&
05155             g_last->top_committed <= g_last->top_reserved &&
05156             (unsigned) g_last->top_committed % g_pagesize == 0);
05157     assert ((unsigned) g_last->top_reserved % g_regionsize == 0);
05158     assert ((unsigned) g_last->reserve_size % g_regionsize == 0);
05159 
05160 sbrk_exit:
05161 #if defined (USE_MALLOC_LOCK) && defined (NEEDED)
05162     /* Release spin lock */
05163     slrelease (&g_sl);
05164 #endif
05165     return result;
05166 }
05167 
05168 /* mmap for windows */
05169 static void *mmap (void *ptr, long size, long prot, long type, long handle, long arg) {
05170     static long g_pagesize;
05171     static long g_regionsize;
05172 #ifdef TRACE
05173     printf ("mmap %d\n", size);
05174 #endif
05175 #if defined (USE_MALLOC_LOCK) && defined (NEEDED)
05176     /* Wait for spin lock */
05177     slwait (&g_sl);
05178 #endif
05179     /* First time initialization */
05180     if (! g_pagesize) 
05181         g_pagesize = getpagesize ();
05182     if (! g_regionsize) 
05183         g_regionsize = getregionsize ();
05184     /* Assert preconditions */
05185     assert ((unsigned) ptr % g_regionsize == 0);
05186     assert (size % g_pagesize == 0);
05187     /* Allocate this */
05188     ptr = VirtualAlloc (ptr, size,
05189                                             MEM_RESERVE | MEM_COMMIT | MEM_TOP_DOWN, PAGE_READWRITE);
05190     if (! ptr) {
05191         ptr = (void *) MORECORE_FAILURE;
05192         goto mmap_exit;
05193     }
05194     /* Assert postconditions */
05195     assert ((unsigned) ptr % g_regionsize == 0);
05196 #ifdef TRACE
05197     printf ("Commit %p %d\n", ptr, size);
05198 #endif
05199 mmap_exit:
05200 #if defined (USE_MALLOC_LOCK) && defined (NEEDED)
05201     /* Release spin lock */
05202     slrelease (&g_sl);
05203 #endif
05204     return ptr;
05205 }
05206 
05207 /* munmap for windows */
05208 static long munmap (void *ptr, long size) {
05209     static long g_pagesize;
05210     static long g_regionsize;
05211     int rc = MUNMAP_FAILURE;
05212 #ifdef TRACE
05213     printf ("munmap %p %d\n", ptr, size);
05214 #endif
05215 #if defined (USE_MALLOC_LOCK) && defined (NEEDED)
05216     /* Wait for spin lock */
05217     slwait (&g_sl);
05218 #endif
05219     /* First time initialization */
05220     if (! g_pagesize) 
05221         g_pagesize = getpagesize ();
05222     if (! g_regionsize) 
05223         g_regionsize = getregionsize ();
05224     /* Assert preconditions */
05225     assert ((unsigned) ptr % g_regionsize == 0);
05226     assert (size % g_pagesize == 0);
05227     /* Free this */
05228     if (! VirtualFree (ptr, 0, 
05229                        MEM_RELEASE))
05230         goto munmap_exit;
05231     rc = 0;
05232 #ifdef TRACE
05233     printf ("Release %p %d\n", ptr, size);
05234 #endif
05235 munmap_exit:
05236 #if defined (USE_MALLOC_LOCK) && defined (NEEDED)
05237     /* Release spin lock */
05238     slrelease (&g_sl);
05239 #endif
05240     return rc;
05241 }
05242 
05243 static void vminfo (unsigned long *free, unsigned long *reserved, unsigned long *committed) {
05244     MEMORY_BASIC_INFORMATION memory_info;
05245     memory_info.BaseAddress = 0;
05246     *free = *reserved = *committed = 0;
05247     while (VirtualQuery (memory_info.BaseAddress, &memory_info, sizeof (memory_info))) {
05248         switch (memory_info.State) {
05249         case MEM_FREE:
05250             *free += memory_info.RegionSize;
05251             break;
05252         case MEM_RESERVE:
05253             *reserved += memory_info.RegionSize;
05254             break;
05255         case MEM_COMMIT:
05256             *committed += memory_info.RegionSize;
05257             break;
05258         }
05259         memory_info.BaseAddress = (char *) memory_info.BaseAddress + memory_info.RegionSize;
05260     }
05261 }
05262 
05263 static int cpuinfo (int whole, unsigned long *kernel, unsigned long *user) {
05264     if (whole) {
05265         __int64 creation64, exit64, kernel64, user64;
05266         int rc = GetProcessTimes (GetCurrentProcess (), 
05267                                   (FILETIME *) &creation64,  
05268                                   (FILETIME *) &exit64, 
05269                                   (FILETIME *) &kernel64, 
05270                                   (FILETIME *) &user64);
05271         if (! rc) {
05272             *kernel = 0;
05273             *user = 0;
05274             return FALSE;
05275         } 
05276         *kernel = (unsigned long) (kernel64 / 10000);
05277         *user = (unsigned long) (user64 / 10000);
05278         return TRUE;
05279     } else {
05280         __int64 creation64, exit64, kernel64, user64;
05281         int rc = GetThreadTimes (GetCurrentThread (), 
05282                                  (FILETIME *) &creation64,  
05283                                  (FILETIME *) &exit64, 
05284                                  (FILETIME *) &kernel64, 
05285                                  (FILETIME *) &user64);
05286         if (! rc) {
05287             *kernel = 0;
05288             *user = 0;
05289             return FALSE;
05290         } 
05291         *kernel = (unsigned long) (kernel64 / 10000);
05292         *user = (unsigned long) (user64 / 10000);
05293         return TRUE;
05294     }
05295 }
05296 
05297 #endif /* WIN32 */
05298 
05299 /* ------------------------------------------------------------
05300 History:
05301 
05302     V2.7.0 Sun Mar 11 14:14:06 2001  Doug Lea  (dl at gee)
05303       * Introduce independent_comalloc and independent_calloc.
05304         Thanks to Michael Pachos for motivation and help.
05305       * Make optional .h file available
05306       * Allow > 2GB requests on 32bit systems.
05307       * new WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.
05308         Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
05309         and Anonymous.
05310       * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for 
05311         helping test this.)
05312       * memalign: check alignment arg
05313       * realloc: don't try to shift chunks backwards, since this
05314         leads to  more fragmentation in some programs and doesn't
05315         seem to help in any others.
05316       * Collect all cases in malloc requiring system memory into sYSMALLOc
05317       * Use mmap as backup to sbrk
05318       * Place all internal state in malloc_state
05319       * Introduce fastbins (although similar to 2.5.1)
05320       * Many minor tunings and cosmetic improvements
05321       * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK 
05322       * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
05323         Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.
05324       * Include errno.h to support default failure action.
05325 
05326     V2.6.6 Sun Dec  5 07:42:19 1999  Doug Lea  (dl at gee)
05327       * return null for negative arguments
05328       * Added Several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
05329          * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
05330           (e.g. WIN32 platforms)
05331          * Cleanup header file inclusion for WIN32 platforms
05332          * Cleanup code to avoid Microsoft Visual C++ compiler complaints
05333          * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
05334            memory allocation routines
05335          * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
05336          * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
05337            usage of 'assert' in non-WIN32 code
05338          * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
05339            avoid infinite loop
05340       * Always call 'fREe()' rather than 'free()'
05341 
05342     V2.6.5 Wed Jun 17 15:57:31 1998  Doug Lea  (dl at gee)
05343       * Fixed ordering problem with boundary-stamping
05344 
05345     V2.6.3 Sun May 19 08:17:58 1996  Doug Lea  (dl at gee)
05346       * Added pvalloc, as recommended by H.J. Liu
05347       * Added 64bit pointer support mainly from Wolfram Gloger
05348       * Added anonymously donated WIN32 sbrk emulation
05349       * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
05350       * malloc_extend_top: fix mask error that caused wastage after
05351         foreign sbrks
05352       * Add linux mremap support code from HJ Liu
05353 
05354     V2.6.2 Tue Dec  5 06:52:55 1995  Doug Lea  (dl at gee)
05355       * Integrated most documentation with the code.
05356       * Add support for mmap, with help from
05357         Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
05358       * Use last_remainder in more cases.
05359       * Pack bins using idea from  colin@nyx10.cs.du.edu
05360       * Use ordered bins instead of best-fit threshold
05361       * Eliminate block-local decls to simplify tracing and debugging.
05362       * Support another case of realloc via move into top
05363       * Fix error occurring when initial sbrk_base not word-aligned.
05364       * Rely on page size for units instead of SBRK_UNIT to
05365         avoid surprises about sbrk alignment conventions.
05366       * Add mallinfo, mallopt. Thanks to Raymond Nijssen
05367         (raymond@es.ele.tue.nl) for the suggestion.
05368       * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
05369       * More precautions for cases where other routines call sbrk,
05370         courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
05371       * Added macros etc., allowing use in linux libc from
05372         H.J. Lu (hjl@gnu.ai.mit.edu)
05373       * Inverted this history list
05374 
05375     V2.6.1 Sat Dec  2 14:10:57 1995  Doug Lea  (dl at gee)
05376       * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
05377       * Removed all preallocation code since under current scheme
05378         the work required to undo bad preallocations exceeds
05379         the work saved in good cases for most test programs.
05380       * No longer use return list or unconsolidated bins since
05381         no scheme using them consistently outperforms those that don't
05382         given above changes.
05383       * Use best fit for very large chunks to prevent some worst-cases.
05384       * Added some support for debugging
05385 
05386     V2.6.0 Sat Nov  4 07:05:23 1995  Doug Lea  (dl at gee)
05387       * Removed footers when chunks are in use. Thanks to
05388         Paul Wilson (wilson@cs.texas.edu) for the suggestion.
05389 
05390     V2.5.4 Wed Nov  1 07:54:51 1995  Doug Lea  (dl at gee)
05391       * Added malloc_trim, with help from Wolfram Gloger
05392         (wmglo@Dent.MED.Uni-Muenchen.DE).
05393 
05394     V2.5.3 Tue Apr 26 10:16:01 1994  Doug Lea  (dl at g)
05395 
05396     V2.5.2 Tue Apr  5 16:20:40 1994  Doug Lea  (dl at g)
05397       * realloc: try to expand in both directions
05398       * malloc: swap order of clean-bin strategy;
05399       * realloc: only conditionally expand backwards
05400       * Try not to scavenge used bins
05401       * Use bin counts as a guide to preallocation
05402       * Occasionally bin return list chunks in first scan
05403       * Add a few optimizations from colin@nyx10.cs.du.edu
05404 
05405     V2.5.1 Sat Aug 14 15:40:43 1993  Doug Lea  (dl at g)
05406       * faster bin computation & slightly different binning
05407       * merged all consolidations to one part of malloc proper
05408          (eliminating old malloc_find_space & malloc_clean_bin)
05409       * Scan 2 returns chunks (not just 1)
05410       * Propagate failure in realloc if malloc returns 0
05411       * Add stuff to allow compilation on non-ANSI compilers
05412           from kpv@research.att.com
05413 
05414     V2.5 Sat Aug  7 07:41:59 1993  Doug Lea  (dl at g.oswego.edu)
05415       * removed potential for odd address access in prev_chunk
05416       * removed dependency on getpagesize.h
05417       * misc cosmetics and a bit more internal documentation
05418       * anticosmetics: mangled names in macros to evade debugger strangeness
05419       * tested on sparc, hp-700, dec-mips, rs6000
05420           with gcc & native cc (hp, dec only) allowing
05421           Detlefs & Zorn comparison study (in SIGPLAN Notices.)
05422 
05423     Trial version Fri Aug 28 13:14:29 1992  Doug Lea  (dl at g.oswego.edu)
05424       * Based loosely on libg++-1.2X malloc. (It retains some of the overall
05425          structure of old version,  but most details differ.)
05426 
05427 */
