Stackdb
Stackdb is a stackable, multi-target and -level source debugger and memory forensics library.
dlmalloc.c
1 /*
2  This is a version (aka dlmalloc) of malloc/free/realloc written by
3  Doug Lea and released to the public domain, as explained at
4  http://creativecommons.org/publicdomain/zero/1.0/ Send questions,
5  comments, complaints, performance data, etc to dl@cs.oswego.edu
6 
7 * Version 2.8.6 Wed Aug 29 06:57:58 2012 Doug Lea
8  Note: There may be an updated version of this malloc obtainable at
9  ftp://gee.cs.oswego.edu/pub/misc/malloc.c
10  Check before installing!
11 
12 * Quickstart
13 
14  This library is all in one file to simplify the most common usage:
15  ftp it, compile it (-O3), and link it into another program. All of
16  the compile-time options default to reasonable values for use on
17  most platforms. You might later want to step through various
18  compile-time and dynamic tuning options.
19 
20  For convenience, an include file for code using this malloc is at:
21  ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.6.h
22  You don't really need this .h file unless you call functions not
23  defined in your system include files. The .h file contains only the
24  excerpts from this file needed for using this malloc on ANSI C/C++
25  systems, so long as you haven't changed compile-time options about
26  naming and tuning parameters. If you do, then you can create your
27  own malloc.h that does include all settings by cutting at the point
28  indicated below. Note that you may already by default be using a C
29  library containing a malloc that is based on some version of this
30  malloc (for example in linux). You might still want to use the one
31  in this file to customize settings or to avoid overheads associated
32  with library versions.
33 
34 * Vital statistics:
35 
36  Supported pointer/size_t representation: 4 or 8 bytes
37  size_t MUST be an unsigned type of the same width as
38  pointers. (If you are using an ancient system that declares
39  size_t as a signed type, or need it to be a different width
40  than pointers, you can use a previous release of this malloc
41  (e.g. 2.7.2) supporting these.)
42 
43  Alignment: 8 bytes (minimum)
44  This suffices for nearly all current machines and C compilers.
45  However, you can define MALLOC_ALIGNMENT to be wider than this
46  if necessary (up to 128bytes), at the expense of using more space.
47 
48  Minimum overhead per allocated chunk: 4 or 8 bytes (if 4byte sizes)
49  8 or 16 bytes (if 8byte sizes)
50  Each malloced chunk has a hidden word of overhead holding size
51  and status information, and additional cross-check word
52  if FOOTERS is defined.
53 
54  Minimum allocated size: 4-byte ptrs: 16 bytes (including overhead)
55  8-byte ptrs: 32 bytes (including overhead)
56 
57  Even a request for zero bytes (i.e., malloc(0)) returns a
58  pointer to something of the minimum allocatable size.
59  The maximum overhead wastage (i.e., number of extra bytes
60  allocated than were requested in malloc) is less than or equal
61  to the minimum size, except for requests >= mmap_threshold that
62  are serviced via mmap(), where the worst case wastage is about
63  32 bytes plus the remainder from a system page (the minimal
64  mmap unit); typically 4096 or 8192 bytes.
65 
66  Security: static-safe; optionally more or less
67  The "security" of malloc refers to the ability of malicious
68  code to accentuate the effects of errors (for example, freeing
69  space that is not currently malloc'ed or overwriting past the
70  ends of chunks) in code that calls malloc. This malloc
71  guarantees not to modify any memory locations below the base of
72  heap, i.e., static variables, even in the presence of usage
73  errors. The routines additionally detect most improper frees
74  and reallocs. All this holds as long as the static bookkeeping
75  for malloc itself is not corrupted by some other means. This
76  is only one aspect of security -- these checks do not, and
77  cannot, detect all possible programming errors.
78 
79  If FOOTERS is defined nonzero, then each allocated chunk
80  carries an additional check word to verify that it was malloced
81  from its space. These check words are the same within each
82  execution of a program using malloc, but differ across
83  executions, so externally crafted fake chunks cannot be
84  freed. This improves security by rejecting frees/reallocs that
85  could corrupt heap memory, in addition to the checks preventing
86  writes to statics that are always on. This may further improve
87  security at the expense of time and space overhead. (Note that
88  FOOTERS may also be worth using with MSPACES.)
89 
90  By default detected errors cause the program to abort (calling
91  "abort()"). You can override this to instead proceed past
92  errors by defining PROCEED_ON_ERROR. In this case, a bad free
93  has no effect, and a malloc that encounters a bad address
94  caused by user overwrites will ignore the bad address by
95  dropping pointers and indices to all known memory. This may
96  be appropriate for programs that should continue if at all
97  possible in the face of programming errors, although they may
98  run out of memory because dropped memory is never reclaimed.
99 
100  If you don't like either of these options, you can define
101  CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
102  else. And if you are sure that your program using malloc has
103  no errors or vulnerabilities, you can define INSECURE to 1,
104  which might (or might not) provide a small performance improvement.
105 
106  It is also possible to limit the maximum total allocatable
107  space, using malloc_set_footprint_limit. This is not
108  designed as a security feature in itself (calls to set limits
109  are not screened or privileged), but may be useful as one
110  aspect of a secure implementation.
111 
112  Thread-safety: NOT thread-safe unless USE_LOCKS defined non-zero
113  When USE_LOCKS is defined, each public call to malloc, free,
114  etc is surrounded with a lock. By default, this uses a plain
115  pthread mutex, win32 critical section, or a spin-lock if
116  available for the platform and not disabled by setting
117  USE_SPIN_LOCKS=0. However, if USE_RECURSIVE_LOCKS is defined,
118  recursive versions are used instead (which are not required for
119  base functionality but may be needed in layered extensions).
120  Using a global lock is not especially fast, and can be a major
121  bottleneck. It is designed only to provide minimal protection
122  in concurrent environments, and to provide a basis for
123  extensions. If you are using malloc in a concurrent program,
124  consider instead using nedmalloc
125  (http://www.nedprod.com/programs/portable/nedmalloc/) or
126  ptmalloc (See http://www.malloc.de), which are derived from
127  versions of this malloc.
128 
129  System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
130  This malloc can use unix sbrk or any emulation (invoked using
131  the CALL_MORECORE macro) and/or mmap/munmap or any emulation
132  (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
133  memory. On most unix systems, it tends to work best if both
134  MORECORE and MMAP are enabled. On Win32, it uses emulations
135  based on VirtualAlloc. It also uses common C library functions
136  like memset.
137 
138  Compliance: I believe it is compliant with the Single Unix Specification
139  (See http://www.unix.org). Also SVID/XPG, ANSI C, and probably
140  others as well.
141 
142 * Overview of algorithms
143 
144  This is not the fastest, most space-conserving, most portable, or
145  most tunable malloc ever written. However it is among the fastest
146  while also being among the most space-conserving, portable and
147  tunable. Consistent balance across these factors results in a good
148  general-purpose allocator for malloc-intensive programs.
149 
150  In most ways, this malloc is a best-fit allocator. Generally, it
151  chooses the best-fitting existing chunk for a request, with ties
152  broken in approximately least-recently-used order. (This strategy
153  normally maintains low fragmentation.) However, for requests less
154  than 256bytes, it deviates from best-fit when there is not an
155  exactly fitting available chunk by preferring to use space adjacent
156  to that used for the previous small request, as well as by breaking
157  ties in approximately most-recently-used order. (These enhance
158  locality of series of small allocations.) And for very large requests
159  (>= 256Kb by default), it relies on system memory mapping
160  facilities, if supported. (This helps avoid carrying around and
161  possibly fragmenting memory used only for large chunks.)
162 
163  All operations (except malloc_stats and mallinfo) have execution
164  times that are bounded by a constant factor of the number of bits in
165  a size_t, not counting any clearing in calloc or copying in realloc,
166  or actions surrounding MORECORE and MMAP that have times
167  proportional to the number of non-contiguous regions returned by
168  system allocation routines, which is often just 1. In real-time
169  applications, you can optionally suppress segment traversals using
170  NO_SEGMENT_TRAVERSAL, which assures bounded execution even when
171  system allocators return non-contiguous spaces, at the typical
172  expense of carrying around more memory and increased fragmentation.
173 
174  The implementation is not very modular and seriously overuses
175  macros. Perhaps someday all C compilers will do as good a job
176  inlining modular code as can now be done by brute-force expansion,
177  but now, enough of them seem not to.
178 
179  Some compilers issue a lot of warnings about code that is
180  dead/unreachable only on some platforms, and also about intentional
181  uses of negation on unsigned types. All known cases of each can be
182  ignored.
183 
184  For a longer but out of date high-level description, see
185  http://gee.cs.oswego.edu/dl/html/malloc.html
186 
187 * MSPACES
188  If MSPACES is defined, then in addition to malloc, free, etc.,
189  this file also defines mspace_malloc, mspace_free, etc. These
190  are versions of malloc routines that take an "mspace" argument
191  obtained using create_mspace, to control all internal bookkeeping.
192  If ONLY_MSPACES is defined, only these versions are compiled.
193  So if you would like to use this allocator for only some allocations,
194  and your system malloc for others, you can compile with
195  ONLY_MSPACES and then do something like...
196  static mspace mymspace = create_mspace(0,0); // for example
197  #define mymalloc(bytes) mspace_malloc(mymspace, bytes)
198 
199  (Note: If you only need one instance of an mspace, you can instead
200  use "USE_DL_PREFIX" to relabel the global malloc.)
201 
202  You can similarly create thread-local allocators by storing
203  mspaces as thread-locals. For example:
204  static __thread mspace tlms = 0;
205  void* tlmalloc(size_t bytes) {
206  if (tlms == 0) tlms = create_mspace(0, 0);
207  return mspace_malloc(tlms, bytes);
208  }
209  void tlfree(void* mem) { mspace_free(tlms, mem); }
210 
211  Unless FOOTERS is defined, each mspace is completely independent.
212  You cannot allocate from one and free to another (although
213  conformance is only weakly checked, so usage errors are not always
214  caught). If FOOTERS is defined, then each chunk carries around a tag
215  indicating its originating mspace, and frees are directed to their
216  originating spaces. Normally, this requires use of locks.
217 
218  ------------------------- Compile-time options ---------------------------
219 
220 Be careful in setting #define values for numerical constants of type
221 size_t. On some systems, literal values are not automatically extended
222 to size_t precision unless they are explicitly cast. You can also
223 use the symbolic values MAX_SIZE_T, SIZE_T_ONE, etc below.
224 
225 WIN32 default: defined if _WIN32 defined
226  Defining WIN32 sets up defaults for MS environment and compilers.
227  Otherwise defaults are for unix. Beware that there seem to be some
228  cases where this malloc might not be a pure drop-in replacement for
229  Win32 malloc: Random-looking failures from Win32 GDI APIs (e.g.,
230  SetDIBits()) may be due to bugs in some video driver implementations
231  when pixel buffers are malloc()ed, and the region spans more than
232  one VirtualAlloc()ed region. Because dlmalloc uses a small (64Kb)
233  default granularity, pixel buffers may straddle virtual allocation
234  regions more often than when using the Microsoft allocator. You can
235  avoid this by using VirtualAlloc() and VirtualFree() for all pixel
236  buffers rather than using malloc(). If this is not possible,
237  recompile this malloc with a larger DEFAULT_GRANULARITY. Note:
238  in cases where MSC and gcc (cygwin) are known to differ on WIN32,
239  conditions use _MSC_VER to distinguish them.
240 
241 DLMALLOC_EXPORT default: extern
242  Defines how public APIs are declared. If you want to export via a
243  Windows DLL, you might define this as
244  #define DLMALLOC_EXPORT extern __declspec(dllexport)
245  If you want a POSIX ELF shared object, you might use
246  #define DLMALLOC_EXPORT extern __attribute__((visibility("default")))
247 
248 MALLOC_ALIGNMENT default: (size_t)(2 * sizeof(void *))
249  Controls the minimum alignment for malloc'ed chunks. It must be a
250  power of two and at least 8, even on machines for which smaller
251  alignments would suffice. It may be defined as larger than this
252  though. Note however that code and data structures are optimized for
253  the case of 8-byte alignment.
254 
255 MSPACES default: 0 (false)
256  If true, compile in support for independent allocation spaces.
257  This is only supported if HAVE_MMAP is true.
258 
259 ONLY_MSPACES default: 0 (false)
260  If true, only compile in mspace versions, not regular versions.
261 
262 USE_LOCKS default: 0 (false)
263  Causes each call to each public routine to be surrounded with
264  pthread or WIN32 mutex lock/unlock. (If set true, this can be
265  overridden on a per-mspace basis for mspace versions.) If set to a
266  non-zero value other than 1, locks are used, but their
267  implementation is left out, so lock functions must be supplied manually,
268  as described below.
269 
270 USE_SPIN_LOCKS default: 1 iff USE_LOCKS and spin locks available
271  If true, uses custom spin locks for locking. This is currently
272  supported only for gcc >= 4.1, older gccs on x86 platforms, and recent
273  MS compilers. Otherwise, posix locks or win32 critical sections are
274  used.
275 
276 USE_RECURSIVE_LOCKS default: not defined
277  If defined nonzero, uses recursive (aka reentrant) locks, otherwise
278  uses plain mutexes. This is not required for malloc proper, but may
279  be needed for layered allocators such as nedmalloc.
280 
281 LOCK_AT_FORK default: not defined
282  If defined nonzero, performs pthread_atfork upon initialization
283  to initialize child lock while holding parent lock. The implementation
284  assumes that pthread locks (not custom locks) are being used. In other
285  cases, you may need to customize the implementation.
286 
287 FOOTERS default: 0
288  If true, provide extra checking and dispatching by placing
289  information in the footers of allocated chunks. This adds
290  space and time overhead.
291 
292 INSECURE default: 0
293  If true, omit checks for usage errors and heap space overwrites.
294 
295 USE_DL_PREFIX default: NOT defined
296  Causes compiler to prefix all public routines with the string 'dl'.
297  This can be useful when you only want to use this malloc in one part
298  of a program, using your regular system malloc elsewhere.
299 
300 MALLOC_INSPECT_ALL default: NOT defined
301  If defined, compiles malloc_inspect_all and mspace_inspect_all, that
302  perform traversal of all heap space. Unless access to these
303  functions is otherwise restricted, you probably do not want to
304  include them in secure implementations.
305 
306 ABORT default: defined as abort()
307  Defines how to abort on failed checks. On most systems, a failed
308  check cannot die with an "assert" or even print an informative
309  message, because the underlying print routines in turn call malloc,
310  which will fail again. Generally, the best policy is to simply call
311  abort(). It's not very useful to do more than this because many
312  errors due to overwriting will show up as address faults (null, odd
313  addresses etc) rather than malloc-triggered checks, so will also
314  abort. Also, most compilers know that abort() does not return, so
315  can better optimize code conditionally calling it.
316 
317 PROCEED_ON_ERROR default: defined as 0 (false)
318  Controls whether detected bad addresses cause them to be bypassed
319  rather than aborting. If set, detected bad arguments to free and
320  realloc are ignored. And all bookkeeping information is zeroed out
321  upon a detected overwrite of freed heap space, thus losing the
322  ability to ever return it from malloc again, but enabling the
323  application to proceed. If PROCEED_ON_ERROR is defined, the
324  static variable malloc_corruption_error_count is compiled in
325  and can be examined to see if errors have occurred. This option
326  generates slower code than the default abort policy.
327 
328 DEBUG default: NOT defined
329  The DEBUG setting is mainly intended for people trying to modify
330  this code or diagnose problems when porting to new platforms.
331  However, it may also be able to better isolate user errors than just
332  using runtime checks. The assertions in the check routines spell
333  out in more detail the assumptions and invariants underlying the
334  algorithms. The checking is fairly extensive, and will slow down
335  execution noticeably. Calling malloc_stats or mallinfo with DEBUG
336  set will attempt to check every non-mmapped allocated and free chunk
337  in the course of computing the summaries.
338 
339 ABORT_ON_ASSERT_FAILURE default: defined as 1 (true)
340  Debugging assertion failures can be nearly impossible if your
341  version of the assert macro causes malloc to be called, which will
342  lead to a cascade of further failures, blowing the runtime stack.
343  ABORT_ON_ASSERT_FAILURE causes assertion failures to call abort(),
344  which will usually make debugging easier.
345 
346 MALLOC_FAILURE_ACTION default: sets errno to ENOMEM, or no-op on win32
347  The action to take before "return 0" when malloc is unable to
348  return memory because there is none available.
349 
350 HAVE_MORECORE default: 1 (true) unless win32 or ONLY_MSPACES
351  True if this system supports sbrk or an emulation of it.
352 
353 MORECORE default: sbrk
354  The name of the sbrk-style system routine to call to obtain more
355  memory. See below for guidance on writing custom MORECORE
356  functions. The type of the argument to sbrk/MORECORE varies across
357  systems. It cannot be size_t, because it supports negative
358  arguments, so it is normally the signed type of the same width as
359  size_t (sometimes declared as "intptr_t"). It doesn't much matter
360  though. Internally, we only call it with arguments less than half
361  the max value of a size_t, which should work across all reasonable
362  possibilities, although sometimes generating compiler warnings.
363 
364 MORECORE_CONTIGUOUS default: 1 (true) if HAVE_MORECORE
365  If true, take advantage of fact that consecutive calls to MORECORE
366  with positive arguments always return contiguous increasing
367  addresses. This is true of unix sbrk. It does not hurt too much to
368  set it true anyway, since malloc copes with non-contiguities.
369  Setting it false when MORECORE is definitely non-contiguous saves the time
370  and possibly wasted space it would otherwise take to discover this.
371 
372 MORECORE_CANNOT_TRIM default: NOT defined
373  True if MORECORE cannot release space back to the system when given
374  negative arguments. This is generally necessary only if you are
375  using a hand-crafted MORECORE function that cannot handle negative
376  arguments.
377 
378 NO_SEGMENT_TRAVERSAL default: 0
379  If non-zero, suppresses traversals of memory segments
380  returned by either MORECORE or CALL_MMAP. This disables
381  merging of segments that are contiguous, and selectively
382  releasing them to the OS if unused, but bounds execution times.
383 
384 HAVE_MMAP default: 1 (true)
385  True if this system supports mmap or an emulation of it. If so, and
386  HAVE_MORECORE is not true, MMAP is used for all system
387  allocation. If set and HAVE_MORECORE is true as well, MMAP is
388  primarily used to directly allocate very large blocks. It is also
389  used as a backup strategy in cases where MORECORE fails to provide
390  space from system. Note: A single call to MUNMAP is assumed to be
391  able to unmap memory that may have been allocated using multiple calls
392  to MMAP, so long as they are adjacent.
393 
394 HAVE_MREMAP default: 1 on linux, else 0
395  If true, realloc() uses mremap() to re-allocate large blocks and
396  extend or shrink allocation spaces.
397 
398 MMAP_CLEARS default: 1 except on WINCE.
399  True if mmap clears memory so calloc doesn't need to. This is true
400  for standard unix mmap using /dev/zero and on WIN32 except for WINCE.
401 
402 USE_BUILTIN_FFS default: 0 (i.e., not used)
403  Causes malloc to use the builtin ffs() function to compute indices.
404  Some compilers may recognize and intrinsify ffs to be faster than the
405  supplied C version. Also, the case of x86 using gcc is special-cased
406  to an asm instruction, so is already as fast as it can be, and so
407  this setting has no effect. Similarly for Win32 under recent MS compilers.
408  (On most x86s, the asm version is only slightly faster than the C version.)
409 
410 malloc_getpagesize default: derive from system includes, or 4096.
411  The system page size. To the extent possible, this malloc manages
412  memory from the system in page-size units. This may be (and
413  usually is) a function rather than a constant. This is ignored
414  if WIN32, where page size is determined using GetSystemInfo during
415  initialization.
416 
417 USE_DEV_RANDOM default: 0 (i.e., not used)
418  Causes malloc to use /dev/random to initialize secure magic seed for
419  stamping footers. Otherwise, the current time is used.
420 
421 NO_MALLINFO default: 0
422  If defined, don't compile "mallinfo". This can be a simple way
423  of dealing with mismatches between system declarations and
424  those in this file.
425 
426 MALLINFO_FIELD_TYPE default: size_t
427  The type of the fields in the mallinfo struct. This was originally
428  defined as "int" in SVID etc, but is more usefully defined as
429  size_t. The value is used only if HAVE_USR_INCLUDE_MALLOC_H is not set.
430 
431 NO_MALLOC_STATS default: 0
432  If defined, don't compile "malloc_stats". This avoids calls to
433  fprintf and bringing in stdio dependencies you might not want.
434 
435 REALLOC_ZERO_BYTES_FREES default: not defined
436  This should be set if a call to realloc with zero bytes should
437  be the same as a call to free. Some people think it should. Otherwise,
438  since this malloc returns a unique pointer for malloc(0), so does
439  realloc(p, 0).
440 
441 LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H
442 LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H, LACKS_ERRNO_H
443 LACKS_STDLIB_H, LACKS_SCHED_H, LACKS_TIME_H default: NOT defined unless on WIN32
444  Define these if your system does not have these header files.
445  You might need to manually insert some of the declarations they provide.
446 
447 DEFAULT_GRANULARITY default: page size if MORECORE_CONTIGUOUS,
448  system_info.dwAllocationGranularity in WIN32,
449  otherwise 64K.
450  Also settable using mallopt(M_GRANULARITY, x)
451  The unit for allocating and deallocating memory from the system. On
452  most systems with contiguous MORECORE, there is no reason to
453  make this more than a page. However, systems with MMAP tend to
454  either require or encourage larger granularities. You can increase
455  this value to prevent system allocation functions from being called so
456  often, especially if they are slow. The value must be at least one
457  page and must be a power of two. Setting to 0 causes initialization
458  to either page size or win32 region size. (Note: In previous
459  versions of malloc, the equivalent of this option was called
460  "TOP_PAD")
461 
462 DEFAULT_TRIM_THRESHOLD default: 2MB
463  Also settable using mallopt(M_TRIM_THRESHOLD, x)
464  The maximum amount of unused top-most memory to keep before
465  releasing via malloc_trim in free(). Automatic trimming is mainly
466  useful in long-lived programs using contiguous MORECORE. Because
467  trimming via sbrk can be slow on some systems, and can sometimes be
468  wasteful (in cases where programs immediately afterward allocate
469  more large chunks) the value should be high enough so that your
470  overall system performance would improve by releasing this much
471  memory. As a rough guide, you might set to a value close to the
472  average size of a process (program) running on your system.
473  Releasing this much memory would allow such a process to run in
474  memory. Generally, it is worth tuning trim thresholds when a
475  program undergoes phases where several large chunks are allocated
476  and released in ways that can reuse each other's storage, perhaps
477  mixed with phases where there are no such chunks at all. The trim
478  value must be greater than page size to have any useful effect. To
479  disable trimming completely, you can set to MAX_SIZE_T. Note that the trick
480  some people use of mallocing a huge space and then freeing it at
481  program startup, in an attempt to reserve system memory, doesn't
482  have the intended effect under automatic trimming, since that memory
483  will immediately be returned to the system.
484 
485 DEFAULT_MMAP_THRESHOLD default: 256K
486  Also settable using mallopt(M_MMAP_THRESHOLD, x)
487  The request size threshold for using MMAP to directly service a
488  request. Requests of at least this size that cannot be allocated
489  using already-existing space will be serviced via mmap. (If enough
490  normal freed space already exists it is used instead.) Using mmap
491  segregates relatively large chunks of memory so that they can be
492  individually obtained and released from the host system. A request
493  serviced through mmap is never reused by any other request (at least
494  not directly; the system may just so happen to remap successive
495  requests to the same locations). Segregating space in this way has
496  the benefits that: Mmapped space can always be individually released
497  back to the system, which helps keep the system level memory demands
498  of a long-lived program low. Also, mapped memory doesn't become
499  `locked' between other chunks, as can happen with normally allocated
500  chunks, which means that even trimming via malloc_trim would not
501  release them. However, it has the disadvantage that the space
502  cannot be reclaimed, consolidated, and then used to service later
503  requests, as happens with normal chunks. The advantages of mmap
504  nearly always outweigh disadvantages for "large" chunks, but the
505  value of "large" may vary across systems. The default is an
506  empirically derived value that works well in most systems. You can
507  disable mmap by setting to MAX_SIZE_T.
508 
509 MAX_RELEASE_CHECK_RATE default: 4095 unless not HAVE_MMAP
510  The number of consolidated frees between checks to release
511  unused segments when freeing. When using non-contiguous segments,
512  especially with multiple mspaces, checking only for topmost space
513  doesn't always suffice to trigger trimming. To compensate for this,
514  free() will, with a period of MAX_RELEASE_CHECK_RATE (or the
515  current number of segments, if greater) try to release unused
516  segments to the OS when freeing chunks that result in
517  consolidation. The best value for this parameter is a compromise
518  between slowing down frees with relatively costly checks that
519  rarely trigger versus holding on to unused memory. To effectively
520  disable, set to MAX_SIZE_T. This may lead to a very slight speed
521  improvement at the expense of carrying around more memory.
522 */
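/*
  A minimal configuration sketch (one possible setup, not the only one):
  build this file with a few of the options documented above, e.g.

    cc -O3 -DUSE_DL_PREFIX -DUSE_LOCKS=1 -DFOOTERS=1 -c dlmalloc.c

  and, because USE_DL_PREFIX is given, call the dl-prefixed entry points
  from client code:

    void* p = dlmalloc(100);
    // ...
    dlfree(p);
*/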
523 
524 /* Version identifier to allow people to support multiple versions */
525 #ifndef DLMALLOC_VERSION
526 #define DLMALLOC_VERSION 20806
527 #endif /* DLMALLOC_VERSION */
528 
529 #ifndef DLMALLOC_EXPORT
530 #define DLMALLOC_EXPORT extern
531 #endif
532 
533 #ifndef WIN32
534 #ifdef _WIN32
535 #define WIN32 1
536 #endif /* _WIN32 */
537 #ifdef _WIN32_WCE
538 #define LACKS_FCNTL_H
539 #define WIN32 1
540 #endif /* _WIN32_WCE */
541 #endif /* WIN32 */
542 #ifdef WIN32
543 #define WIN32_LEAN_AND_MEAN
544 #include <windows.h>
545 #include <tchar.h>
546 #define HAVE_MMAP 1
547 #define HAVE_MORECORE 0
548 #define LACKS_UNISTD_H
549 #define LACKS_SYS_PARAM_H
550 #define LACKS_SYS_MMAN_H
551 #define LACKS_STRING_H
552 #define LACKS_STRINGS_H
553 #define LACKS_SYS_TYPES_H
554 #define LACKS_ERRNO_H
555 #define LACKS_SCHED_H
556 #ifndef MALLOC_FAILURE_ACTION
557 #define MALLOC_FAILURE_ACTION
558 #endif /* MALLOC_FAILURE_ACTION */
559 #ifndef MMAP_CLEARS
560 #ifdef _WIN32_WCE /* WINCE reportedly does not clear */
561 #define MMAP_CLEARS 0
562 #else
563 #define MMAP_CLEARS 1
564 #endif /* _WIN32_WCE */
565 #endif /*MMAP_CLEARS */
566 #endif /* WIN32 */
567 
568 #if defined(DARWIN) || defined(_DARWIN)
569 /* Mac OSX docs advise not to use sbrk; it seems better to use mmap */
570 #ifndef HAVE_MORECORE
571 #define HAVE_MORECORE 0
572 #define HAVE_MMAP 1
573 /* OSX allocators provide 16 byte alignment */
574 #ifndef MALLOC_ALIGNMENT
575 #define MALLOC_ALIGNMENT ((size_t)16U)
576 #endif
577 #endif /* HAVE_MORECORE */
578 #endif /* DARWIN */
579 
580 #ifndef LACKS_SYS_TYPES_H
581 #include <sys/types.h> /* For size_t */
582 #endif /* LACKS_SYS_TYPES_H */
583 
584 /* The maximum possible size_t value has all bits set */
585 #define MAX_SIZE_T (~(size_t)0)
586 
587 #ifndef USE_LOCKS /* ensure true if spin or recursive locks set */
588 #define USE_LOCKS ((defined(USE_SPIN_LOCKS) && USE_SPIN_LOCKS != 0) || \
589  (defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0))
590 #endif /* USE_LOCKS */
591 
592 #if USE_LOCKS /* Spin locks for gcc >= 4.1, older gcc on x86, MSC >= 1310 */
593 #if ((defined(__GNUC__) && \
594  ((__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 1)) || \
595  defined(__i386__) || defined(__x86_64__))) || \
596  (defined(_MSC_VER) && _MSC_VER>=1310))
597 #ifndef USE_SPIN_LOCKS
598 #define USE_SPIN_LOCKS 1
599 #endif /* USE_SPIN_LOCKS */
600 #elif USE_SPIN_LOCKS
601 #error "USE_SPIN_LOCKS defined without implementation"
602 #endif /* ... locks available... */
603 #elif !defined(USE_SPIN_LOCKS)
604 #define USE_SPIN_LOCKS 0
605 #endif /* USE_LOCKS */
606 
607 #ifndef ONLY_MSPACES
608 #define ONLY_MSPACES 0
609 #endif /* ONLY_MSPACES */
610 #ifndef MSPACES
611 #if ONLY_MSPACES
612 #define MSPACES 1
613 #else /* ONLY_MSPACES */
614 #define MSPACES 0
615 #endif /* ONLY_MSPACES */
616 #endif /* MSPACES */
617 #ifndef MALLOC_ALIGNMENT
618 #define MALLOC_ALIGNMENT ((size_t)(2 * sizeof(void *)))
619 #endif /* MALLOC_ALIGNMENT */
620 #ifndef FOOTERS
621 #define FOOTERS 0
622 #endif /* FOOTERS */
623 #ifndef ABORT
624 #define ABORT abort()
625 #endif /* ABORT */
626 #ifndef ABORT_ON_ASSERT_FAILURE
627 #define ABORT_ON_ASSERT_FAILURE 1
628 #endif /* ABORT_ON_ASSERT_FAILURE */
629 #ifndef PROCEED_ON_ERROR
630 #define PROCEED_ON_ERROR 0
631 #endif /* PROCEED_ON_ERROR */
632 
633 #ifndef INSECURE
634 #define INSECURE 0
635 #endif /* INSECURE */
636 #ifndef MALLOC_INSPECT_ALL
637 #define MALLOC_INSPECT_ALL 0
638 #endif /* MALLOC_INSPECT_ALL */
639 #ifndef HAVE_MMAP
640 #define HAVE_MMAP 1
641 #endif /* HAVE_MMAP */
642 #ifndef MMAP_CLEARS
643 #define MMAP_CLEARS 1
644 #endif /* MMAP_CLEARS */
645 #ifndef HAVE_MREMAP
646 #ifdef linux
647 #define HAVE_MREMAP 1
648 #define _GNU_SOURCE /* Turns on mremap() definition */
649 #else /* linux */
650 #define HAVE_MREMAP 0
651 #endif /* linux */
652 #endif /* HAVE_MREMAP */
653 #ifndef MALLOC_FAILURE_ACTION
654 #define MALLOC_FAILURE_ACTION errno = ENOMEM;
655 #endif /* MALLOC_FAILURE_ACTION */
656 #ifndef HAVE_MORECORE
657 #if ONLY_MSPACES
658 #define HAVE_MORECORE 0
659 #else /* ONLY_MSPACES */
660 #define HAVE_MORECORE 1
661 #endif /* ONLY_MSPACES */
662 #endif /* HAVE_MORECORE */
663 #if !HAVE_MORECORE
664 #define MORECORE_CONTIGUOUS 0
665 #else /* !HAVE_MORECORE */
666 #define MORECORE_DEFAULT sbrk
667 #ifndef MORECORE_CONTIGUOUS
668 #define MORECORE_CONTIGUOUS 1
669 #endif /* MORECORE_CONTIGUOUS */
670 #endif /* HAVE_MORECORE */
671 #ifndef DEFAULT_GRANULARITY
672 #if (MORECORE_CONTIGUOUS || defined(WIN32))
673 #define DEFAULT_GRANULARITY (0) /* 0 means to compute in init_mparams */
674 #else /* MORECORE_CONTIGUOUS */
675 #define DEFAULT_GRANULARITY ((size_t)64U * (size_t)1024U)
676 #endif /* MORECORE_CONTIGUOUS */
677 #endif /* DEFAULT_GRANULARITY */
678 #ifndef DEFAULT_TRIM_THRESHOLD
679 #ifndef MORECORE_CANNOT_TRIM
680 #define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)
681 #else /* MORECORE_CANNOT_TRIM */
682 #define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T
683 #endif /* MORECORE_CANNOT_TRIM */
684 #endif /* DEFAULT_TRIM_THRESHOLD */
685 #ifndef DEFAULT_MMAP_THRESHOLD
686 #if HAVE_MMAP
687 #define DEFAULT_MMAP_THRESHOLD ((size_t)256U * (size_t)1024U)
688 #else /* HAVE_MMAP */
689 #define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
690 #endif /* HAVE_MMAP */
691 #endif /* DEFAULT_MMAP_THRESHOLD */
692 #ifndef MAX_RELEASE_CHECK_RATE
693 #if HAVE_MMAP
694 #define MAX_RELEASE_CHECK_RATE 4095
695 #else
696 #define MAX_RELEASE_CHECK_RATE MAX_SIZE_T
697 #endif /* HAVE_MMAP */
698 #endif /* MAX_RELEASE_CHECK_RATE */
699 #ifndef USE_BUILTIN_FFS
700 #define USE_BUILTIN_FFS 0
701 #endif /* USE_BUILTIN_FFS */
702 #ifndef USE_DEV_RANDOM
703 #define USE_DEV_RANDOM 0
704 #endif /* USE_DEV_RANDOM */
705 #ifndef NO_MALLINFO
706 #define NO_MALLINFO 0
707 #endif /* NO_MALLINFO */
708 #ifndef MALLINFO_FIELD_TYPE
709 #define MALLINFO_FIELD_TYPE size_t
710 #endif /* MALLINFO_FIELD_TYPE */
711 #ifndef NO_MALLOC_STATS
712 #define NO_MALLOC_STATS 0
713 #endif /* NO_MALLOC_STATS */
714 #ifndef NO_SEGMENT_TRAVERSAL
715 #define NO_SEGMENT_TRAVERSAL 0
716 #endif /* NO_SEGMENT_TRAVERSAL */
717 
718 /*
719  mallopt tuning options. SVID/XPG defines four standard parameter
720  numbers for mallopt, normally defined in malloc.h. None of these
721  are used in this malloc, so setting them has no effect. But this
722  malloc does support the following options.
723 */
724 
725 #define M_TRIM_THRESHOLD (-1)
726 #define M_GRANULARITY (-2)
727 #define M_MMAP_THRESHOLD (-3)
728 
729 /* ------------------------ Mallinfo declarations ------------------------ */
730 
731 #if !NO_MALLINFO
732 /*
733  This version of malloc supports the standard SVID/XPG mallinfo
734  routine that returns a struct containing usage properties and
735  statistics. It should work on any system that has a
736  /usr/include/malloc.h defining struct mallinfo. The main
737  declaration needed is the mallinfo struct that is returned (by-copy)
738  by mallinfo(). The mallinfo struct contains a bunch of fields that
739  are not even meaningful in this version of malloc. These fields
740  are instead filled by mallinfo() with other numbers that might be of
741  interest.
742 
743  HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
744  /usr/include/malloc.h file that includes a declaration of struct
745  mallinfo. If so, it is included; else a compliant version is
746  declared below. These must be precisely the same for mallinfo() to
747  work. The original SVID version of this struct, defined on most
748  systems with mallinfo, declares all fields as ints. But some others
749  define as unsigned long. If your system defines the fields using a
750  type of different width than listed here, you MUST #include your
751  system version and #define HAVE_USR_INCLUDE_MALLOC_H.
752 */
753 
754 /* #define HAVE_USR_INCLUDE_MALLOC_H */
755 
756 #ifdef HAVE_USR_INCLUDE_MALLOC_H
757 #include "/usr/include/malloc.h"
758 #else /* HAVE_USR_INCLUDE_MALLOC_H */
759 #ifndef STRUCT_MALLINFO_DECLARED
760 /* HP-UX (and others?) redefines mallinfo unless _STRUCT_MALLINFO is defined */
761 #define _STRUCT_MALLINFO
762 #define STRUCT_MALLINFO_DECLARED 1
763 struct mallinfo {
764  MALLINFO_FIELD_TYPE arena; /* non-mmapped space allocated from system */
765  MALLINFO_FIELD_TYPE ordblks; /* number of free chunks */
766  MALLINFO_FIELD_TYPE smblks; /* always 0 */
767  MALLINFO_FIELD_TYPE hblks; /* always 0 */
768  MALLINFO_FIELD_TYPE hblkhd; /* space in mmapped regions */
769  MALLINFO_FIELD_TYPE usmblks; /* maximum total allocated space */
770  MALLINFO_FIELD_TYPE fsmblks; /* always 0 */
771  MALLINFO_FIELD_TYPE uordblks; /* total allocated space */
772  MALLINFO_FIELD_TYPE fordblks; /* total free space */
773  MALLINFO_FIELD_TYPE keepcost; /* releasable (via malloc_trim) space */
774 };
775 #endif /* STRUCT_MALLINFO_DECLARED */
776 #endif /* HAVE_USR_INCLUDE_MALLOC_H */
777 #endif /* NO_MALLINFO */
778 
779 /*
780  Try to persuade compilers to inline. The most critical functions for
781  inlining are defined as macros, so these aren't used for them.
782 */
783 
784 #ifndef FORCEINLINE
785  #if defined(__GNUC__)
786 #define FORCEINLINE __inline __attribute__ ((always_inline))
787  #elif defined(_MSC_VER)
788  #define FORCEINLINE __forceinline
789  #endif
790 #endif
791 #ifndef NOINLINE
792  #if defined(__GNUC__)
793  #define NOINLINE __attribute__ ((noinline))
794  #elif defined(_MSC_VER)
795  #define NOINLINE __declspec(noinline)
796  #else
797  #define NOINLINE
798  #endif
799 #endif
800 
801 #ifdef __cplusplus
802 extern "C" {
803 #ifndef FORCEINLINE
804  #define FORCEINLINE inline
805 #endif
806 #endif /* __cplusplus */
807 #ifndef FORCEINLINE
808  #define FORCEINLINE
809 #endif
810 
811 #if !ONLY_MSPACES
812 
813 /* ------------------- Declarations of public routines ------------------- */
814 
815 #ifndef USE_DL_PREFIX
816 #define dlcalloc calloc
817 #define dlfree free
818 #define dlmalloc malloc
819 #define dlmemalign memalign
820 #define dlposix_memalign posix_memalign
821 #define dlrealloc realloc
822 #define dlrealloc_in_place realloc_in_place
823 #define dlvalloc valloc
824 #define dlpvalloc pvalloc
825 #define dlmallinfo mallinfo
826 #define dlmallopt mallopt
827 #define dlmalloc_trim malloc_trim
828 #define dlmalloc_stats malloc_stats
829 #define dlmalloc_usable_size malloc_usable_size
830 #define dlmalloc_footprint malloc_footprint
831 #define dlmalloc_max_footprint malloc_max_footprint
832 #define dlmalloc_footprint_limit malloc_footprint_limit
833 #define dlmalloc_set_footprint_limit malloc_set_footprint_limit
834 #define dlmalloc_inspect_all malloc_inspect_all
835 #define dlindependent_calloc independent_calloc
836 #define dlindependent_comalloc independent_comalloc
837 #define dlbulk_free bulk_free
838 #endif /* USE_DL_PREFIX */
839 
840 /*
841  malloc(size_t n)
842  Returns a pointer to a newly allocated chunk of at least n bytes, or
843  null if no space is available, in which case errno is set to ENOMEM
844  on ANSI C systems.
845 
846  If n is zero, malloc returns a minimum-sized chunk. (The minimum
847  size is 16 bytes on most 32bit systems, and 32 bytes on 64bit
848  systems.) Note that size_t is an unsigned type, so calls with
849  arguments that would be negative if signed are interpreted as
850  requests for huge amounts of space, which will often fail. The
851  maximum supported value of n differs across systems, but is in all
852  cases less than the maximum representable value of a size_t.
853 */
854 DLMALLOC_EXPORT void* dlmalloc(size_t);
855 
856 /*
857  free(void* p)
858  Releases the chunk of memory pointed to by p, that had been previously
859  allocated using malloc or a related routine such as realloc.
860  It has no effect if p is null. If p was not malloced or already
861  freed, free(p) will by default cause the current program to abort.
862 */
863 DLMALLOC_EXPORT void dlfree(void*);
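/*
  A minimal malloc/free sketch of the two calls above (shown with the
  unprefixed names; with USE_DL_PREFIX they become dlmalloc/dlfree;
  use_buffer is a hypothetical helper name):

    void use_buffer(void) {
      char* buf = (char*)malloc(1024);
      if (buf == 0)
        return;        // no space; errno set to ENOMEM on ANSI systems
      // ... use up to 1024 bytes of buf ...
      free(buf);       // freeing twice, or freeing a non-malloced pointer, aborts by default
    }
*/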
864 
865 /*
866  calloc(size_t n_elements, size_t element_size);
867  Returns a pointer to n_elements * element_size bytes, with all locations
868  set to zero.
869 */
870 DLMALLOC_EXPORT void* dlcalloc(size_t, size_t);
871 
872 /*
873  realloc(void* p, size_t n)
874  Returns a pointer to a chunk of size n that contains the same data
875  as does chunk p up to the minimum of (n, p's size) bytes, or null
876  if no space is available.
877 
878  The returned pointer may or may not be the same as p. The algorithm
879  prefers extending p in most cases when possible, otherwise it
880  employs the equivalent of a malloc-copy-free sequence.
881 
882  If p is null, realloc is equivalent to malloc.
883 
884  If space is not available, realloc returns null, errno is set (if on
885  ANSI) and p is NOT freed.
886 
887  if n is for fewer bytes than already held by p, the newly unused
888  space is lopped off and freed if possible. realloc with a size
889  argument of zero (re)allocates a minimum-sized chunk.
890 
891  The old unix realloc convention of allowing the last-free'd chunk
892  to be used as an argument to realloc is not supported.
893 */
894 DLMALLOC_EXPORT void* dlrealloc(void*, size_t);
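/*
  A sketch of the usual grow-or-keep idiom implied above: because a
  failed realloc returns null and does NOT free p, assign through a
  temporary so the original block is never lost (grow_to is a
  hypothetical helper name):

    void* grow_to(void* p, size_t n) {
      void* q = realloc(p, n);
      return (q != 0) ? q : p;   // on failure the caller still owns p
    }
*/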
895 
896 /*
897  realloc_in_place(void* p, size_t n)
898  Resizes the space allocated for p to size n, only if this can be
899  done without moving p (i.e., only if there is adjacent space
900  available if n is greater than p's current allocated size, or n is
901  less than or equal to p's size). This may be used instead of plain
902  realloc if an alternative allocation strategy is needed upon failure
903  to expand space; for example, reallocation of a buffer that must be
904  memory-aligned or cleared. You can use realloc_in_place to trigger
905  these alternatives only when needed.
906 
907  Returns p if successful; otherwise null.
908 */
909 DLMALLOC_EXPORT void* dlrealloc_in_place(void*, size_t);
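/*
  One possible pattern for the "alternative allocation strategy" mentioned
  above (a sketch with hypothetical names; assumes memcpy from <string.h>):
  try to resize in place first, and only fall back to an aligned
  allocate-copy-free sequence when that fails.

    void* regrow_aligned(void* p, size_t old_n, size_t new_n) {
      if (realloc_in_place(p, new_n) != 0)
        return p;                        // resized without moving
      {
        void* q = memalign(64, new_n);   // 64 is just an example alignment
        if (q == 0)
          return 0;
        memcpy(q, p, old_n < new_n ? old_n : new_n);
        free(p);
        return q;
      }
    }
*/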
910 
911 /*
912  memalign(size_t alignment, size_t n);
913  Returns a pointer to a newly allocated chunk of n bytes, aligned
914  in accord with the alignment argument.
915 
916  The alignment argument should be a power of two. If the argument is
917  not a power of two, the nearest greater power is used.
918  8-byte alignment is guaranteed by normal malloc calls, so don't
919  bother calling memalign with an argument of 8 or less.
920 
921  Overreliance on memalign is a sure way to fragment space.
922 */
923 DLMALLOC_EXPORT void* dlmemalign(size_t, size_t);
924 
925 /*
926  int posix_memalign(void** pp, size_t alignment, size_t n);
927  Allocates a chunk of n bytes, aligned in accord with the alignment
928  argument. Differs from memalign only in that it (1) assigns the
929  allocated memory to *pp rather than returning it, (2) fails and
930  returns EINVAL if the alignment is not a power of two, and (3) fails and
931  returns ENOMEM if memory cannot be allocated.
932 */
933 DLMALLOC_EXPORT int dlposix_memalign(void**, size_t, size_t);
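/*
  A small sketch of the posix_memalign contract described above
  (page_block is a hypothetical helper; 4096 is an example alignment
  and must be a power of two):

    void* page_block(size_t n) {
      void* p = 0;
      int err = posix_memalign(&p, 4096, n);
      return (err == 0) ? p : 0;   // err is EINVAL or ENOMEM on failure
    }
*/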
934 
935 /*
936  valloc(size_t n);
937  Equivalent to memalign(pagesize, n), where pagesize is the page
938  size of the system. If the pagesize is unknown, 4096 is used.
939 */
940 DLMALLOC_EXPORT void* dlvalloc(size_t);
941 
942 /*
943  mallopt(int parameter_number, int parameter_value)
944  Sets tunable parameters. The format is to provide a
945  (parameter-number, parameter-value) pair. mallopt then sets the
946  corresponding parameter to the argument value if it can (i.e., so
947  long as the value is meaningful), and returns 1 if successful else
948  0. To work around the fact that mallopt is specified to use int,
949  not size_t parameters, the value -1 is specially treated as the
950  maximum unsigned size_t value.
951 
952  SVID/XPG/ANSI defines four standard param numbers for mallopt,
953  normally defined in malloc.h. None of these are used in this malloc,
954  so setting them has no effect. But this malloc also supports other
955  options in mallopt. See below for details. Briefly, supported
956  parameters are as follows (listed defaults are for "typical"
957  configurations).
958 
959  Symbol param # default allowed param values
960  M_TRIM_THRESHOLD -1 2*1024*1024 any (-1 disables)
961  M_GRANULARITY -2 page size any power of 2 >= page size
962  M_MMAP_THRESHOLD -3 256*1024 any (or 0 if no MMAP support)
963 */
964 DLMALLOC_EXPORT int dlmallopt(int, int);
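/*
  A short sketch of the mallopt interface described above, using the M_*
  parameter numbers defined earlier in this file (the values shown are
  arbitrary examples; each call returns 1 on success, 0 otherwise):

    void tune(void) {
      mallopt(M_TRIM_THRESHOLD, 1024*1024);   // trim when 1MB of top space is unused
      mallopt(M_MMAP_THRESHOLD, 512*1024);    // mmap requests of 512KB and up
      mallopt(M_GRANULARITY,    128*1024);    // must be a power of 2 >= page size
    }
*/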
965 
966 /*
967  malloc_footprint();
968  Returns the number of bytes obtained from the system. The total
969  number of bytes allocated by malloc, realloc etc., is less than this
970  value. Unlike mallinfo, this function returns only a precomputed
971  result, so can be called frequently to monitor memory consumption.
972  Even if locks are otherwise defined, this function does not use them,
973  so results might not be up to date.
974 */
975 DLMALLOC_EXPORT size_t dlmalloc_footprint(void);
976 
977 /*
978  malloc_max_footprint();
979  Returns the maximum number of bytes obtained from the system. This
980  value will be greater than current footprint if deallocated space
981  has been reclaimed by the system. The peak number of bytes allocated
982  by malloc, realloc etc., is less than this value. Unlike mallinfo,
983  this function returns only a precomputed result, so can be called
984  frequently to monitor memory consumption. Even if locks are
985  otherwise defined, this function does not use them, so results might
986  not be up to date.
987 */
988 DLMALLOC_EXPORT size_t dlmalloc_max_footprint(void);
989 
990 /*
991  malloc_footprint_limit();
992  Returns the number of bytes that the heap is allowed to obtain from
993  the system, returning the last value returned by
994  malloc_set_footprint_limit, or the maximum size_t value if
995  never set. The returned value reflects a permission. There is no
996  guarantee that this number of bytes can actually be obtained from
997  the system.
998 */
999 DLMALLOC_EXPORT size_t dlmalloc_footprint_limit(void);
1000 
1001 /*
1002  malloc_set_footprint_limit();
1003  Sets the maximum number of bytes to obtain from the system, causing
1004  failure returns from malloc and related functions upon attempts to
1005  exceed this value. The argument value may be subject to page
1006  rounding to an enforceable limit; this actual value is returned.
1007  Using an argument of the maximum possible size_t effectively
1008  disables checks. If the argument is less than or equal to the
1009  current malloc_footprint, then all future allocations that require
1010  additional system memory will fail. However, invocation cannot
1011  retroactively deallocate existing used memory.
1012 */
1013 DLMALLOC_EXPORT size_t dlmalloc_set_footprint_limit(size_t bytes);
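/*
  A sketch combining the footprint calls above: cap the total memory the
  heap may obtain from the system at roughly 64MB (an arbitrary example)
  and return the enforced, possibly page-rounded, limit (cap_heap is a
  hypothetical helper name):

    size_t cap_heap(void) {
      size_t granted = malloc_set_footprint_limit((size_t)64 * 1024 * 1024);
      // malloc_footprint() reports what has actually been obtained so far
      return granted;
    }
*/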
1014 
1015 #if MALLOC_INSPECT_ALL
1016 /*
1017  malloc_inspect_all(void(*handler)(void *start,
1018  void *end,
1019  size_t used_bytes,
1020  void* callback_arg),
1021  void* arg);
1022  Traverses the heap and calls the given handler for each managed
1023  region, skipping all bytes that are (or may be) used for bookkeeping
1024  purposes. Traversal does not include chunks that have been
1025  directly memory mapped. Each reported region begins at the start
1026  address, and continues up to but not including the end address. The
1027  first used_bytes of the region contain allocated data. If
1028  used_bytes is zero, the region is unallocated. The handler is
1029  invoked with the given callback argument. If locks are defined, they
1030  are held during the entire traversal. It is a bad idea to invoke
1031  other malloc functions from within the handler.
1032 
1033  For example, to count the number of in-use chunks with size greater
1034  than 1000, you could write:
1035  static int count = 0;
1036  void count_chunks(void* start, void* end, size_t used, void* arg) {
1037  if (used >= 1000) ++count;
1038  }
1039  then:
1040  malloc_inspect_all(count_chunks, NULL);
1041 
1042  malloc_inspect_all is compiled only if MALLOC_INSPECT_ALL is defined.
1043 */
1044 DLMALLOC_EXPORT void dlmalloc_inspect_all(void(*handler)(void*, void *, size_t, void*),
1045  void* arg);
1046 
1047 #endif /* MALLOC_INSPECT_ALL */
1048 
1049 #if !NO_MALLINFO
1050 /*
1051  mallinfo()
1052  Returns (by copy) a struct containing various summary statistics:
1053 
1054  arena: current total non-mmapped bytes allocated from system
1055  ordblks: the number of free chunks
1056  smblks: always zero.
1057  hblks: current number of mmapped regions
1058  hblkhd: total bytes held in mmapped regions
1059  usmblks: the maximum total allocated space. This will be greater
1060  than current total if trimming has occurred.
1061  fsmblks: always zero
1062  uordblks: current total allocated space (normal or mmapped)
1063  fordblks: total free space
1064  keepcost: the maximum number of bytes that could ideally be released
1065  back to system via malloc_trim. ("ideally" means that
1066  it ignores page restrictions etc.)
1067 
1068  Because these fields are ints, but internal bookkeeping may
1069  be kept as longs, the reported values may wrap around zero and
1070  thus be inaccurate.
1071 */
1072 DLMALLOC_EXPORT struct mallinfo dlmallinfo(void);
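/*
  A small sketch of reading the summary statistics above (bytes_in_use is
  a hypothetical helper; field types are MALLINFO_FIELD_TYPE, size_t by
  default in this file):

    size_t bytes_in_use(void) {
      struct mallinfo mi = mallinfo();
      return (size_t)mi.uordblks;   // current total allocated space
    }
*/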
1073 #endif /* NO_MALLINFO */
1074 
1075 /*
1076  independent_calloc(size_t n_elements, size_t element_size, void* chunks[]);
1077 
1078  independent_calloc is similar to calloc, but instead of returning a
1079  single cleared space, it returns an array of pointers to n_elements
1080  independent elements that can hold contents of size elem_size, each
1081  of which starts out cleared, and can be independently freed,
1082  realloc'ed etc. The elements are guaranteed to be adjacently
1083  allocated (this is not guaranteed to occur with multiple callocs or
1084  mallocs), which may also improve cache locality in some
1085  applications.
1086 
1087  The "chunks" argument is optional (i.e., may be null, which is
1088  probably the most typical usage). If it is null, the returned array
1089  is itself dynamically allocated and should also be freed when it is
1090  no longer needed. Otherwise, the chunks array must be of at least
1091  n_elements in length. It is filled in with the pointers to the
1092  chunks.
1093 
1094  In either case, independent_calloc returns this pointer array, or
1095  null if the allocation failed. If n_elements is zero and "chunks"
1096  is null, it returns a chunk representing an array with zero elements
1097  (which should be freed if not wanted).
1098 
1099  Each element must be freed when it is no longer needed. This can be
1100  done all at once using bulk_free.
1101 
1102  independent_calloc simplifies and speeds up implementations of many
1103  kinds of pools. It may also be useful when constructing large data
1104  structures that initially have a fixed number of fixed-sized nodes,
1105  but the number is not known at compile time, and some of the nodes
1106  may later need to be freed. For example:
1107 
1108  struct Node { int item; struct Node* next; };
1109 
1110  struct Node* build_list() {
1111  struct Node** pool;
1112  int n = read_number_of_nodes_needed();
1113  if (n <= 0) return 0;
1114  pool = (struct Node**)independent_calloc(n, sizeof(struct Node), 0);
1115  if (pool == 0) die();
1116  // organize into a linked list...
1117  struct Node* first = pool[0];
1118  for (int i = 0; i < n-1; ++i)
1119  pool[i]->next = pool[i+1];
1120  free(pool); // Can now free the array (or not, if it is needed later)
1121  return first;
1122  }
1123 */
1124 DLMALLOC_EXPORT void** dlindependent_calloc(size_t, size_t, void**);
1125 
1126 /*
1127  independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]);
1128 
1129  independent_comalloc allocates, all at once, a set of n_elements
1130  chunks with sizes indicated in the "sizes" array. It returns
1131  an array of pointers to these elements, each of which can be
1132  independently freed, realloc'ed etc. The elements are guaranteed to
1133  be adjacently allocated (this is not guaranteed to occur with
1134  multiple callocs or mallocs), which may also improve cache locality
1135  in some applications.
1136 
1137  The "chunks" argument is optional (i.e., may be null). If it is null
1138  the returned array is itself dynamically allocated and should also
1139  be freed when it is no longer needed. Otherwise, the chunks array
1140  must be of at least n_elements in length. It is filled in with the
1141  pointers to the chunks.
1142 
1143  In either case, independent_comalloc returns this pointer array, or
1144  null if the allocation failed. If n_elements is zero and chunks is
1145  null, it returns a chunk representing an array with zero elements
1146  (which should be freed if not wanted).
1147 
1148  Each element must be freed when it is no longer needed. This can be
1149  done all at once using bulk_free.
1150 
1151  independent_comalloc differs from independent_calloc in that each
1152  element may have a different size, and also that it does not
1153  automatically clear elements.
1154 
1155  independent_comalloc can be used to speed up allocation in cases
1156  where several structs or objects must always be allocated at the
1157  same time. For example:
1158 
1159  struct Head { ... };
1160  struct Foot { ... };
1161 
1162  void send_message(char* msg) {
1163  int msglen = strlen(msg);
1164  size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
1165  void* chunks[3];
1166  if (independent_comalloc(3, sizes, chunks) == 0)
1167  die();
1168  struct Head* head = (struct Head*)(chunks[0]);
1169  char* body = (char*)(chunks[1]);
1170  struct Foot* foot = (struct Foot*)(chunks[2]);
1171  // ...
1172  }
1173 
1174  In general though, independent_comalloc is worth using only for
1175  larger values of n_elements. For small values, you probably won't
1176  detect enough difference from series of malloc calls to bother.
1177 
1178  Overuse of independent_comalloc can increase overall memory usage,
1179  since it cannot reuse existing noncontiguous small chunks that
1180  might be available for some of the elements.
1181 */
1182 DLMALLOC_EXPORT void** dlindependent_comalloc(size_t, size_t*, void**);
1183 
1184 /*
1185  bulk_free(void* array[], size_t n_elements)
1186  Frees and clears (sets to null) each non-null pointer in the given
1187  array. This is likely to be faster than freeing them one-by-one.
1188  If footers are used, pointers that have been allocated in different
1189  mspaces are not freed or cleared, and the count of all such pointers
1190  is returned. For large arrays of pointers with poor locality, it
1191  may be worthwhile to sort this array before calling bulk_free.
1192 */
1193 DLMALLOC_EXPORT size_t dlbulk_free(void**, size_t n_elements);
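/*
  A sketch of releasing a batch of pointers with bulk_free as described
  above (drop_all is a hypothetical helper; freed slots are set to null):

    size_t drop_all(void* ptrs[], size_t n) {
      return bulk_free(ptrs, n);   // nonzero only if FOOTERS found foreign-mspace pointers
    }
*/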
1194 
1195 /*
1196  pvalloc(size_t n);
1197  Equivalent to valloc(minimum-page-that-holds(n)), that is,
1198  round up n to nearest pagesize.
1199  */
1200 DLMALLOC_EXPORT void* dlpvalloc(size_t);
1201 
1202 /*
1203  malloc_trim(size_t pad);
1204 
1205  If possible, gives memory back to the system (via negative arguments
1206  to sbrk) if there is unused memory at the `high' end of the malloc
1207  pool or in unused MMAP segments. You can call this after freeing
1208  large blocks of memory to potentially reduce the system-level memory
1209  requirements of a program. However, it cannot guarantee to reduce
1210  memory. Under some allocation patterns, some large free blocks of
1211  memory will be locked between two used chunks, so they cannot be
1212  given back to the system.
1213 
1214  The `pad' argument to malloc_trim represents the amount of free
1215  trailing space to leave untrimmed. If this argument is zero, only
1216  the minimum amount of memory to maintain internal data structures
1217  will be left. Non-zero arguments can be supplied to maintain enough
1218  trailing space to service future expected allocations without having
1219  to re-obtain memory from the system.
1220 
1221  Malloc_trim returns 1 if it actually released any memory, else 0.
1222 */
1223 DLMALLOC_EXPORT int dlmalloc_trim(size_t);
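/*
  A sketch of the trimming call described above, keeping 1MB of trailing
  slack for expected near-future allocations (the pad value is just an
  example; release_unused is a hypothetical helper name):

    int release_unused(void) {
      return malloc_trim(1024 * 1024);   // 1 if any memory was returned to the system
    }
*/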
1224 
1225 /*
1226  malloc_stats();
1227  Prints on stderr the amount of space obtained from the system (both
1228  via sbrk and mmap), the maximum amount (which may be more than
1229  current if malloc_trim and/or munmap got called), and the current
1230  number of bytes allocated via malloc (or realloc, etc) but not yet
1231  freed. Note that this is the number of bytes allocated, not the
1232  number requested. It will be larger than the number requested
1233  because of alignment and bookkeeping overhead. Because it includes
1234  alignment wastage as being in use, this figure may be greater than
1235  zero even when no user-level chunks are allocated.
1236 
1237  The reported current and maximum system memory can be inaccurate if
1238  a program makes other calls to system memory allocation functions
1239  (normally sbrk) outside of malloc.
1240 
1241  malloc_stats prints only the most commonly interesting statistics.
1242  More information can be obtained by calling mallinfo.
1243 */
1244 DLMALLOC_EXPORT void dlmalloc_stats(void);
1245 
1246 /*
1247  malloc_usable_size(void* p);
1248 
1249  Returns the number of bytes you can actually use in
1250  an allocated chunk, which may be more than you requested (although
1251  often not) due to alignment and minimum size constraints.
1252  You can use this many bytes without worrying about
1253  overwriting other allocated objects. This is not a particularly great
1254  programming practice. malloc_usable_size can be more useful in
1255  debugging and assertions, for example:
1256 
1257  p = malloc(n);
1258  assert(malloc_usable_size(p) >= 256);
1259 */
1260 size_t dlmalloc_usable_size(void*);
1261 
1262 #endif /* ONLY_MSPACES */
1263 
1264 #if MSPACES
1265 
1266 /*
1267  mspace is an opaque type representing an independent
1268  region of space that supports mspace_malloc, etc.
1269 */
1270 typedef void* mspace;
1271 
1272 /*
1273  create_mspace creates and returns a new independent space with the
1274  given initial capacity, or, if 0, the default granularity size. It
1275  returns null if there is no system memory available to create the
1276  space. If argument locked is non-zero, the space uses a separate
1277  lock to control access. The capacity of the space will grow
1278  dynamically as needed to service mspace_malloc requests. You can
1279  control the sizes of incremental increases of this space by
1280  compiling with a different DEFAULT_GRANULARITY or dynamically
1281  setting with mallopt(M_GRANULARITY, value).
1282 */
1283 DLMALLOC_EXPORT mspace create_mspace(size_t capacity, int locked);
1284 
1285 /*
1286  destroy_mspace destroys the given space, and attempts to return all
1287  of its memory back to the system, returning the total number of
1288  bytes freed. After destruction, the results of access to all memory
1289  used by the space become undefined.
1290 */
1291 DLMALLOC_EXPORT size_t destroy_mspace(mspace msp);
1292 
1293 /*
1294  create_mspace_with_base uses the memory supplied as the initial base
1295  of a new mspace. Part (less than 128*sizeof(size_t) bytes) of this
1296  space is used for bookkeeping, so the capacity must be at least this
1297  large. (Otherwise 0 is returned.) When this initial space is
1298  exhausted, additional memory will be obtained from the system.
1299  Destroying this space will deallocate all additionally allocated
1300  space (if possible) but not the initial base.
1301 */
1302 DLMALLOC_EXPORT mspace create_mspace_with_base(void* base, size_t capacity, int locked);
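/*
  A minimal sketch: build an mspace on top of a caller-supplied static
  buffer. The buffer must exceed the bookkeeping overhead (less than
  128*sizeof(size_t) bytes), otherwise 0 is returned.

    static char arena[64 * 1024];

    mspace make_arena_space(void) {
      return create_mspace_with_base(arena, sizeof(arena), 0);  // no locking
    }
*/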
1303 
1304 /*
1305  mspace_track_large_chunks controls whether requests for large chunks
1306  are allocated in their own untracked mmapped regions, separate from
1307  others in this mspace. By default large chunks are not tracked,
1308  which reduces fragmentation. However, such chunks are not
1309  necessarily released to the system upon destroy_mspace. Enabling
1310  tracking by setting to true may increase fragmentation, but avoids
1311  leakage when relying on destroy_mspace to release all memory
1312  allocated using this space. The function returns the previous
1313  setting.
1314 */
1315 DLMALLOC_EXPORT int mspace_track_large_chunks(mspace msp, int enable);
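/*
  A minimal sketch: enable tracking so that a later destroy_mspace can
  also release large mmapped chunks allocated from this space.

    mspace msp = create_mspace(0, 0);
    int was_tracking = mspace_track_large_chunks(msp, 1);  // returns prior setting
*/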
1316 
1317 
1318 /*
1319  mspace_malloc behaves as malloc, but operates within
1320  the given space.
1321 */
1322 DLMALLOC_EXPORT void* mspace_malloc(mspace msp, size_t bytes);
1323 
1324 /*
1325  mspace_free behaves as free, but operates within
1326  the given space.
1327 
1328  If compiled with FOOTERS==1, mspace_free is not actually needed.
1329  free may be called instead of mspace_free because freed chunks from
1330  any space are handled by their originating spaces.
1331 */
1332 DLMALLOC_EXPORT void mspace_free(mspace msp, void* mem);
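/*
  A minimal lifecycle sketch for an independent space:

    void mspace_demo(void) {
      mspace msp = create_mspace(0, 0);   // default capacity, no locking
      if (msp != 0) {
        void* p = mspace_malloc(msp, 128);
        if (p != 0)
          mspace_free(msp, p);
        destroy_mspace(msp);              // returns the space's memory to the system
      }
    }
*/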
1333 
1334 /*
1335  mspace_realloc behaves as realloc, but operates within
1336  the given space.
1337 
1338  If compiled with FOOTERS==1, mspace_realloc is not actually
1339  needed. realloc may be called instead of mspace_realloc because
1340  realloced chunks from any space are handled by their originating
1341  spaces.
1342 */
1343 DLMALLOC_EXPORT void* mspace_realloc(mspace msp, void* mem, size_t newsize);
1344 
1345 /*
1346  mspace_calloc behaves as calloc, but operates within
1347  the given space.
1348 */
1349 DLMALLOC_EXPORT void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size);
1350 
1351 /*
1352  mspace_memalign behaves as memalign, but operates within
1353  the given space.
1354 */
1355 DLMALLOC_EXPORT void* mspace_memalign(mspace msp, size_t alignment, size_t bytes);
1356 
1357 
1358 /*
1359  XXX: johnsond added this; same as dlposix_memalign, but with mspaces
1360 */
1361 DLMALLOC_EXPORT int mspace_posix_memalign(mspace msp, void** pp, size_t alignment, size_t bytes);
1362 
1363 /*
1364  mspace_independent_calloc behaves as independent_calloc, but
1365  operates within the given space.
1366 */
1367 DLMALLOC_EXPORT void** mspace_independent_calloc(mspace msp, size_t n_elements,
1368  size_t elem_size, void* chunks[]);
1369 
1370 /*
1371  mspace_independent_comalloc behaves as independent_comalloc, but
1372  operates within the given space.
1373 */
1374 DLMALLOC_EXPORT void** mspace_independent_comalloc(mspace msp, size_t n_elements,
1375  size_t sizes[], void* chunks[]);
1376 
1377 /*
1378  mspace_footprint() returns the number of bytes obtained from the
1379  system for this space.
1380 */
1381 DLMALLOC_EXPORT size_t mspace_footprint(mspace msp);
1382 
1383 /*
1384  mspace_max_footprint() returns the peak number of bytes obtained from the
1385  system for this space.
1386 */
1387 DLMALLOC_EXPORT size_t mspace_max_footprint(mspace msp);
1388 
1389 
1390 #if !NO_MALLINFO
1391 /*
1392  mspace_mallinfo behaves as mallinfo, but reports properties of
1393  the given space.
1394 */
1395 DLMALLOC_EXPORT struct mallinfo mspace_mallinfo(mspace msp);
1396 #endif /* NO_MALLINFO */
1397 
1398 /*
1399  mspace_usable_size(const void* p) behaves the same as malloc_usable_size;
1400 */
1401 DLMALLOC_EXPORT size_t mspace_usable_size(const void* mem);
1402 
1403 /*
1404  mspace_malloc_stats behaves as malloc_stats, but reports
1405  properties of the given space.
1406 */
1407 DLMALLOC_EXPORT void mspace_malloc_stats(mspace msp);
1408 
1409 /*
1410  mspace_trim behaves as malloc_trim, but
1411  operates within the given space.
1412 */
1413 DLMALLOC_EXPORT int mspace_trim(mspace msp, size_t pad);
1414 
1415 /*
1416  An alias for mallopt.
1417 */
1418 DLMALLOC_EXPORT int mspace_mallopt(int, int);
1419 
1420 #endif /* MSPACES */
1421 
1422 #ifdef __cplusplus
1423 } /* end of extern "C" */
1424 #endif /* __cplusplus */
1425 
1426 /*
1427  ========================================================================
1428  To make a fully customizable malloc.h header file, cut everything
1429  above this line, put into file malloc.h, edit to suit, and #include it
1430  on the next line, as well as in programs that use this malloc.
1431  ========================================================================
1432 */
1433 
1434 /* #include "malloc.h" */
1435 
1436 /*------------------------------ internal #includes ---------------------- */
1437 
1438 #ifdef _MSC_VER
1439 #pragma warning( disable : 4146 ) /* no "unsigned" warnings */
1440 #endif /* _MSC_VER */
1441 #if !NO_MALLOC_STATS
1442 #include <stdio.h> /* for printing in malloc_stats */
1443 #endif /* NO_MALLOC_STATS */
1444 #ifndef LACKS_ERRNO_H
1445 #include <errno.h> /* for MALLOC_FAILURE_ACTION */
1446 #endif /* LACKS_ERRNO_H */
1447 #ifdef DEBUG
1448 #if ABORT_ON_ASSERT_FAILURE
1449 #undef assert
1450 #define assert(x) if(!(x)) ABORT
1451 #else /* ABORT_ON_ASSERT_FAILURE */
1452 #include <assert.h>
1453 #endif /* ABORT_ON_ASSERT_FAILURE */
1454 #else /* DEBUG */
1455 #ifndef assert
1456 #define assert(x)
1457 #endif
1458 #define DEBUG 0
1459 #endif /* DEBUG */
1460 #if !defined(WIN32) && !defined(LACKS_TIME_H)
1461 #include <time.h> /* for magic initialization */
1462 #endif /* WIN32 */
1463 #ifndef LACKS_STDLIB_H
1464 #include <stdlib.h> /* for abort() */
1465 #endif /* LACKS_STDLIB_H */
1466 #ifndef LACKS_STRING_H
1467 #include <string.h> /* for memset etc */
1468 #endif /* LACKS_STRING_H */
1469 #if USE_BUILTIN_FFS
1470 #ifndef LACKS_STRINGS_H
1471 #include <strings.h> /* for ffs */
1472 #endif /* LACKS_STRINGS_H */
1473 #endif /* USE_BUILTIN_FFS */
1474 #if HAVE_MMAP
1475 #ifndef LACKS_SYS_MMAN_H
1476 /* On some versions of linux, mremap decl in mman.h needs __USE_GNU set */
1477 #if (defined(linux) && !defined(__USE_GNU))
1478 #define __USE_GNU 1
1479 #include <sys/mman.h> /* for mmap */
1480 #undef __USE_GNU
1481 #else
1482 #include <sys/mman.h> /* for mmap */
1483 #endif /* linux */
1484 #endif /* LACKS_SYS_MMAN_H */
1485 #ifndef LACKS_FCNTL_H
1486 #include <fcntl.h>
1487 #endif /* LACKS_FCNTL_H */
1488 #endif /* HAVE_MMAP */
1489 #ifndef LACKS_UNISTD_H
1490 #include <unistd.h> /* for sbrk, sysconf */
1491 #else /* LACKS_UNISTD_H */
1492 #if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
1493 extern void* sbrk(ptrdiff_t);
1494 #endif /* FreeBSD etc */
1495 #endif /* LACKS_UNISTD_H */
1496 
1497 /* Declarations for locking */
1498 #if USE_LOCKS
1499 #ifndef WIN32
1500 #if defined (__SVR4) && defined (__sun) /* solaris */
1501 #include <thread.h>
1502 #elif !defined(LACKS_SCHED_H)
1503 #include <sched.h>
1504 #endif /* solaris or LACKS_SCHED_H */
1505 #if (defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0) || !USE_SPIN_LOCKS
1506 #include <pthread.h>
1507 #endif /* USE_RECURSIVE_LOCKS ... */
1508 #elif defined(_MSC_VER)
1509 #ifndef _M_AMD64
1510 /* These are already defined on AMD64 builds */
1511 #ifdef __cplusplus
1512 extern "C" {
1513 #endif /* __cplusplus */
1514 LONG __cdecl _InterlockedCompareExchange(LONG volatile *Dest, LONG Exchange, LONG Comp);
1515 LONG __cdecl _InterlockedExchange(LONG volatile *Target, LONG Value);
1516 #ifdef __cplusplus
1517 }
1518 #endif /* __cplusplus */
1519 #endif /* _M_AMD64 */
1520 #pragma intrinsic (_InterlockedCompareExchange)
1521 #pragma intrinsic (_InterlockedExchange)
1522 #define interlockedcompareexchange _InterlockedCompareExchange
1523 #define interlockedexchange _InterlockedExchange
1524 #elif defined(WIN32) && defined(__GNUC__)
1525 #define interlockedcompareexchange(a, b, c) __sync_val_compare_and_swap(a, c, b)
1526 #define interlockedexchange __sync_lock_test_and_set
1527 #endif /* Win32 */
1528 #else /* USE_LOCKS */
1529 #endif /* USE_LOCKS */
1530 
1531 #ifndef LOCK_AT_FORK
1532 #define LOCK_AT_FORK 0
1533 #endif
1534 
1535 /* Declarations for bit scanning on win32 */
1536 #if defined(_MSC_VER) && _MSC_VER>=1300
1537 #ifndef BitScanForward /* Try to avoid pulling in WinNT.h */
1538 #ifdef __cplusplus
1539 extern "C" {
1540 #endif /* __cplusplus */
1541 unsigned char _BitScanForward(unsigned long *index, unsigned long mask);
1542 unsigned char _BitScanReverse(unsigned long *index, unsigned long mask);
1543 #ifdef __cplusplus
1544 }
1545 #endif /* __cplusplus */
1546 
1547 #define BitScanForward _BitScanForward
1548 #define BitScanReverse _BitScanReverse
1549 #pragma intrinsic(_BitScanForward)
1550 #pragma intrinsic(_BitScanReverse)
1551 #endif /* BitScanForward */
1552 #endif /* defined(_MSC_VER) && _MSC_VER>=1300 */
1553 
1554 #ifndef WIN32
1555 #ifndef malloc_getpagesize
1556 # ifdef _SC_PAGESIZE /* some SVR4 systems omit an underscore */
1557 # ifndef _SC_PAGE_SIZE
1558 # define _SC_PAGE_SIZE _SC_PAGESIZE
1559 # endif
1560 # endif
1561 # ifdef _SC_PAGE_SIZE
1562 # define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
1563 # else
1564 # if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
1565  extern size_t getpagesize();
1566 # define malloc_getpagesize getpagesize()
1567 # else
1568 # ifdef WIN32 /* use supplied emulation of getpagesize */
1569 # define malloc_getpagesize getpagesize()
1570 # else
1571 # ifndef LACKS_SYS_PARAM_H
1572 # include <sys/param.h>
1573 # endif
1574 # ifdef EXEC_PAGESIZE
1575 # define malloc_getpagesize EXEC_PAGESIZE
1576 # else
1577 # ifdef NBPG
1578 # ifndef CLSIZE
1579 # define malloc_getpagesize NBPG
1580 # else
1581 # define malloc_getpagesize (NBPG * CLSIZE)
1582 # endif
1583 # else
1584 # ifdef NBPC
1585 # define malloc_getpagesize NBPC
1586 # else
1587 # ifdef PAGESIZE
1588 # define malloc_getpagesize PAGESIZE
1589 # else /* just guess */
1590 # define malloc_getpagesize ((size_t)4096U)
1591 # endif
1592 # endif
1593 # endif
1594 # endif
1595 # endif
1596 # endif
1597 # endif
1598 #endif
1599 #endif
1600 
1601 /* ------------------- size_t and alignment properties -------------------- */
1602 
1603 /* The byte and bit size of a size_t */
1604 #define SIZE_T_SIZE (sizeof(size_t))
1605 #define SIZE_T_BITSIZE (sizeof(size_t) << 3)
1606 
1607 /* Some constants coerced to size_t */
1608 /* Annoying but necessary to avoid errors on some platforms */
1609 #define SIZE_T_ZERO ((size_t)0)
1610 #define SIZE_T_ONE ((size_t)1)
1611 #define SIZE_T_TWO ((size_t)2)
1612 #define SIZE_T_FOUR ((size_t)4)
1613 #define TWO_SIZE_T_SIZES (SIZE_T_SIZE<<1)
1614 #define FOUR_SIZE_T_SIZES (SIZE_T_SIZE<<2)
1615 #define SIX_SIZE_T_SIZES (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)
1616 #define HALF_MAX_SIZE_T (MAX_SIZE_T / 2U)
1617 
1618 /* The bit mask value corresponding to MALLOC_ALIGNMENT */
1619 #define CHUNK_ALIGN_MASK (MALLOC_ALIGNMENT - SIZE_T_ONE)
1620 
1621 /* True if address a has acceptable alignment */
1622 #define is_aligned(A) (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0)
1623 
1624 /* the number of bytes to offset an address to align it */
1625 #define align_offset(A)\
1626  ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\
1627  ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))
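/*
  Worked example, assuming a 64-bit build where MALLOC_ALIGNMENT is 16
  (so CHUNK_ALIGN_MASK is 15): for A == 0x1003,
    align_offset(A) == (16 - (0x1003 & 15)) & 15 == 13,
  and 0x1003 + 13 == 0x1010, the next 16-byte-aligned address.
*/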
1628 
1629 /* -------------------------- MMAP preliminaries ------------------------- */
1630 
1631 /*
1632  If HAVE_MORECORE or HAVE_MMAP are false, we just define calls and
1633  checks to fail so compiler optimizer can delete code rather than
1634  using so many "#if"s.
1635 */
1636 
1637 
1638 /* MORECORE and MMAP must return MFAIL on failure */
1639 #define MFAIL ((void*)(MAX_SIZE_T))
1640 #define CMFAIL ((char*)(MFAIL)) /* defined for convenience */
1641 
1642 #if HAVE_MMAP
1643 
1644 #ifndef WIN32
1645 #define MUNMAP_DEFAULT(a, s) munmap((a), (s))
1646 #define MMAP_PROT (PROT_READ|PROT_WRITE)
1647 #if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
1648 #define MAP_ANONYMOUS MAP_ANON
1649 #endif /* MAP_ANON */
1650 #ifdef MAP_ANONYMOUS
1651 #define MMAP_FLAGS (MAP_PRIVATE|MAP_ANONYMOUS)
1652 #define MMAP_DEFAULT(s) mmap(0, (s), MMAP_PROT, MMAP_FLAGS, -1, 0)
1653 #else /* MAP_ANONYMOUS */
1654 /*
1655  Nearly all versions of mmap support MAP_ANONYMOUS, so the following
1656  is unlikely to be needed, but is supplied just in case.
1657 */
1658 #define MMAP_FLAGS (MAP_PRIVATE)
1659 static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
1660 #define MMAP_DEFAULT(s) ((dev_zero_fd < 0) ? \
1661  (dev_zero_fd = open("/dev/zero", O_RDWR), \
1662  mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) : \
1663  mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0))
1664 #endif /* MAP_ANONYMOUS */
1665 
1666 #define DIRECT_MMAP_DEFAULT(s) MMAP_DEFAULT(s)
1667 
1668 #else /* WIN32 */
1669 
1670 /* Win32 MMAP via VirtualAlloc */
1671 static FORCEINLINE void* win32mmap(size_t size) {
1672  void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT, PAGE_READWRITE);
1673  return (ptr != 0)? ptr: MFAIL;
1674 }
1675 
1676 /* For direct MMAP, use MEM_TOP_DOWN to minimize interference */
1677 static FORCEINLINE void* win32direct_mmap(size_t size) {
1678  void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN,
1679  PAGE_READWRITE);
1680  return (ptr != 0)? ptr: MFAIL;
1681 }
1682 
1683 /* This function supports releasing coalesced segments */
1684 static FORCEINLINE int win32munmap(void* ptr, size_t size) {
1685  MEMORY_BASIC_INFORMATION minfo;
1686  char* cptr = (char*)ptr;
1687  while (size) {
1688  if (VirtualQuery(cptr, &minfo, sizeof(minfo)) == 0)
1689  return -1;
1690  if (minfo.BaseAddress != cptr || minfo.AllocationBase != cptr ||
1691  minfo.State != MEM_COMMIT || minfo.RegionSize > size)
1692  return -1;
1693  if (VirtualFree(cptr, 0, MEM_RELEASE) == 0)
1694  return -1;
1695  cptr += minfo.RegionSize;
1696  size -= minfo.RegionSize;
1697  }
1698  return 0;
1699 }
1700 
1701 #define MMAP_DEFAULT(s) win32mmap(s)
1702 #define MUNMAP_DEFAULT(a, s) win32munmap((a), (s))
1703 #define DIRECT_MMAP_DEFAULT(s) win32direct_mmap(s)
1704 #endif /* WIN32 */
1705 #endif /* HAVE_MMAP */
1706 
1707 #if HAVE_MREMAP
1708 #ifndef WIN32
1709 #define MREMAP_DEFAULT(addr, osz, nsz, mv) mremap((addr), (osz), (nsz), (mv))
1710 #endif /* WIN32 */
1711 #endif /* HAVE_MREMAP */
1712 
1713 /**
1714  * Define CALL_MORECORE
1715  */
1716 #if HAVE_MORECORE
1717 void * MORECORE (int size);
1718  #ifdef MORECORE
1719  #define CALL_MORECORE(S) MORECORE(S)
1720  #else /* MORECORE */
1721  #define CALL_MORECORE(S) MORECORE_DEFAULT(S)
1722  #endif /* MORECORE */
1723 #else /* HAVE_MORECORE */
1724  #define CALL_MORECORE(S) MFAIL
1725 #endif /* HAVE_MORECORE */
1726 
1727 /**
1728  * Define CALL_MMAP/CALL_MUNMAP/CALL_DIRECT_MMAP
1729  */
1730 #if HAVE_MMAP
1731  #define USE_MMAP_BIT (SIZE_T_ONE)
1732 
1733  #ifdef MMAP
1734  #define CALL_MMAP(s) MMAP(s)
1735  #else /* MMAP */
1736  #define CALL_MMAP(s) MMAP_DEFAULT(s)
1737  #endif /* MMAP */
1738  #ifdef MUNMAP
1739  #define CALL_MUNMAP(a, s) MUNMAP((a), (s))
1740  #else /* MUNMAP */
1741  #define CALL_MUNMAP(a, s) MUNMAP_DEFAULT((a), (s))
1742  #endif /* MUNMAP */
1743  #ifdef DIRECT_MMAP
1744  #define CALL_DIRECT_MMAP(s) DIRECT_MMAP(s)
1745  #else /* DIRECT_MMAP */
1746  #define CALL_DIRECT_MMAP(s) DIRECT_MMAP_DEFAULT(s)
1747  #endif /* DIRECT_MMAP */
1748 #else /* HAVE_MMAP */
1749  #define USE_MMAP_BIT (SIZE_T_ZERO)
1750 
1751  #define MMAP(s) MFAIL
1752  #define MUNMAP(a, s) (-1)
1753  #define DIRECT_MMAP(s) MFAIL
1754  #define CALL_DIRECT_MMAP(s) DIRECT_MMAP(s)
1755  #define CALL_MMAP(s) MMAP(s)
1756  #define CALL_MUNMAP(a, s) MUNMAP((a), (s))
1757 #endif /* HAVE_MMAP */
1758 
1759 /**
1760  * Define CALL_MREMAP
1761  */
1762 #if HAVE_MMAP && HAVE_MREMAP
1763  #ifdef MREMAP
1764  #define CALL_MREMAP(addr, osz, nsz, mv) MREMAP((addr), (osz), (nsz), (mv))
1765  #else /* MREMAP */
1766  #define CALL_MREMAP(addr, osz, nsz, mv) MREMAP_DEFAULT((addr), (osz), (nsz), (mv))
1767  #endif /* MREMAP */
1768 #else /* HAVE_MMAP && HAVE_MREMAP */
1769  #define CALL_MREMAP(addr, osz, nsz, mv) MFAIL
1770 #endif /* HAVE_MMAP && HAVE_MREMAP */
1771 
1772 /* mstate bit set if contiguous morecore disabled or failed */
1773 #define USE_NONCONTIGUOUS_BIT (4U)
1774 
1775 /* segment bit set in create_mspace_with_base */
1776 #define EXTERN_BIT (8U)
1777 
1778 
1779 /* --------------------------- Lock preliminaries ------------------------ */
1780 
1781 /*
1782  When locks are defined, there is one global lock, plus
1783  one per-mspace lock.
1784 
1785  The global lock ensures that mparams.magic and other unique
1786  mparams values are initialized only once. It also protects
1787  sequences of calls to MORECORE. In many cases sys_alloc requires
1788  two calls that should not be interleaved with calls by other
1789  threads. This does not protect against direct calls to MORECORE
1790  by other threads not using this lock, so there is still code to
1791  cope as best we can with interference.
1792 
1793  Per-mspace locks surround calls to malloc, free, etc.
1794  By default, locks are simple non-reentrant mutexes.
1795 
1796  Because lock-protected regions generally have bounded times, it is
1797  OK to use the supplied simple spinlocks. Spinlocks are likely to
1798  improve performance for lightly contended applications, but worsen
1799  performance under heavy contention.
1800 
1801  If USE_LOCKS is > 1, the definitions of lock routines here are
1802  bypassed, in which case you will need to define the type MLOCK_T,
1803  and at least INITIAL_LOCK, DESTROY_LOCK, ACQUIRE_LOCK, RELEASE_LOCK
1804  and TRY_LOCK. You must also declare a
1805  static MLOCK_T malloc_global_mutex = { initialization values };.
1806 
1807 */
1808 
1809 #if !USE_LOCKS
1810 #define USE_LOCK_BIT (0U)
1811 #define INITIAL_LOCK(l) (0)
1812 #define DESTROY_LOCK(l) (0)
1813 #define ACQUIRE_MALLOC_GLOBAL_LOCK()
1814 #define RELEASE_MALLOC_GLOBAL_LOCK()
1815 
1816 #else
1817 #if USE_LOCKS > 1
1818 /* ----------------------- User-defined locks ------------------------ */
1819 /* Define your own lock implementation here */
1820 /* #define INITIAL_LOCK(lk) ... */
1821 /* #define DESTROY_LOCK(lk) ... */
1822 /* #define ACQUIRE_LOCK(lk) ... */
1823 /* #define RELEASE_LOCK(lk) ... */
1824 /* #define TRY_LOCK(lk) ... */
1825 /* static MLOCK_T malloc_global_mutex = ... */
1826 
1827 #elif USE_SPIN_LOCKS
1828 
1829 /* First, define CAS_LOCK and CLEAR_LOCK on ints */
1830 /* Note CAS_LOCK defined to return 0 on success */
1831 
1832 #if defined(__GNUC__)&& (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 1))
1833 #define CAS_LOCK(sl) __sync_lock_test_and_set(sl, 1)
1834 #define CLEAR_LOCK(sl) __sync_lock_release(sl)
1835 
1836 #elif (defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__)))
1837 /* Custom spin locks for older gcc on x86 */
1838 static FORCEINLINE int x86_cas_lock(int *sl) {
1839  int ret;
1840  int val = 1;
1841  int cmp = 0;
1842  __asm__ __volatile__ ("lock; cmpxchgl %1, %2"
1843  : "=a" (ret)
1844  : "r" (val), "m" (*(sl)), "0"(cmp)
1845  : "memory", "cc");
1846  return ret;
1847 }
1848 
1849 static FORCEINLINE void x86_clear_lock(int* sl) {
1850  assert(*sl != 0);
1851  int prev = 0;
1852  int ret;
1853  __asm__ __volatile__ ("lock; xchgl %0, %1"
1854  : "=r" (ret)
1855  : "m" (*(sl)), "0"(prev)
1856  : "memory");
1857 }
1858 
1859 #define CAS_LOCK(sl) x86_cas_lock(sl)
1860 #define CLEAR_LOCK(sl) x86_clear_lock(sl)
1861 
1862 #else /* Win32 MSC */
1863 #define CAS_LOCK(sl) interlockedexchange(sl, (LONG)1)
1864 #define CLEAR_LOCK(sl) interlockedexchange (sl, (LONG)0)
1865 
1866 #endif /* ... gcc spin locks ... */
1867 
1868 /* How to yield for a spin lock */
1869 #define SPINS_PER_YIELD 63
1870 #if defined(_MSC_VER)
1871 #define SLEEP_EX_DURATION 50 /* delay for yield/sleep */
1872 #define SPIN_LOCK_YIELD SleepEx(SLEEP_EX_DURATION, FALSE)
1873 #elif defined (__SVR4) && defined (__sun) /* solaris */
1874 #define SPIN_LOCK_YIELD thr_yield();
1875 #elif !defined(LACKS_SCHED_H)
1876 #define SPIN_LOCK_YIELD sched_yield();
1877 #else
1878 #define SPIN_LOCK_YIELD
1879 #endif /* ... yield ... */
1880 
1881 #if !defined(USE_RECURSIVE_LOCKS) || USE_RECURSIVE_LOCKS == 0
1882 /* Plain spin locks use single word (embedded in malloc_states) */
1883 static int spin_acquire_lock(int *sl) {
1884  int spins = 0;
1885  while (*(volatile int *)sl != 0 || CAS_LOCK(sl)) {
1886  if ((++spins & SPINS_PER_YIELD) == 0) {
1887  SPIN_LOCK_YIELD;
1888  }
1889  }
1890  return 0;
1891 }
1892 
1893 #define MLOCK_T int
1894 #define TRY_LOCK(sl) !CAS_LOCK(sl)
1895 #define RELEASE_LOCK(sl) CLEAR_LOCK(sl)
1896 #define ACQUIRE_LOCK(sl) (CAS_LOCK(sl)? spin_acquire_lock(sl) : 0)
1897 #define INITIAL_LOCK(sl) (*sl = 0)
1898 #define DESTROY_LOCK(sl) (0)
1899 static MLOCK_T malloc_global_mutex = 0;
1900 
1901 #else /* USE_RECURSIVE_LOCKS */
1902 /* types for lock owners */
1903 #ifdef WIN32
1904 #define THREAD_ID_T DWORD
1905 #define CURRENT_THREAD GetCurrentThreadId()
1906 #define EQ_OWNER(X,Y) ((X) == (Y))
1907 #else
1908 /*
1909  Note: the following assume that pthread_t is a type that can be
1910  initialized to (casted) zero. If this is not the case, you will need to
1911  somehow redefine these or not use spin locks.
1912 */
1913 #define THREAD_ID_T pthread_t
1914 #define CURRENT_THREAD pthread_self()
1915 #define EQ_OWNER(X,Y) pthread_equal(X, Y)
1916 #endif
1917 
1918 struct malloc_recursive_lock {
1919  int sl;
1920  unsigned int c;
1921  THREAD_ID_T threadid;
1922 };
1923 
1924 #define MLOCK_T struct malloc_recursive_lock
1925 static MLOCK_T malloc_global_mutex = { 0, 0, (THREAD_ID_T)0};
1926 
1927 static FORCEINLINE void recursive_release_lock(MLOCK_T *lk) {
1928  assert(lk->sl != 0);
1929  if (--lk->c == 0) {
1930  CLEAR_LOCK(&lk->sl);
1931  }
1932 }
1933 
1934 static FORCEINLINE int recursive_acquire_lock(MLOCK_T *lk) {
1935  THREAD_ID_T mythreadid = CURRENT_THREAD;
1936  int spins = 0;
1937  for (;;) {
1938  if (*((volatile int *)(&lk->sl)) == 0) {
1939  if (!CAS_LOCK(&lk->sl)) {
1940  lk->threadid = mythreadid;
1941  lk->c = 1;
1942  return 0;
1943  }
1944  }
1945  else if (EQ_OWNER(lk->threadid, mythreadid)) {
1946  ++lk->c;
1947  return 0;
1948  }
1949  if ((++spins & SPINS_PER_YIELD) == 0) {
1950  SPIN_LOCK_YIELD;
1951  }
1952  }
1953 }
1954 
1955 static FORCEINLINE int recursive_try_lock(MLOCK_T *lk) {
1956  THREAD_ID_T mythreadid = CURRENT_THREAD;
1957  if (*((volatile int *)(&lk->sl)) == 0) {
1958  if (!CAS_LOCK(&lk->sl)) {
1959  lk->threadid = mythreadid;
1960  lk->c = 1;
1961  return 1;
1962  }
1963  }
1964  else if (EQ_OWNER(lk->threadid, mythreadid)) {
1965  ++lk->c;
1966  return 1;
1967  }
1968  return 0;
1969 }
1970 
1971 #define RELEASE_LOCK(lk) recursive_release_lock(lk)
1972 #define TRY_LOCK(lk) recursive_try_lock(lk)
1973 #define ACQUIRE_LOCK(lk) recursive_acquire_lock(lk)
1974 #define INITIAL_LOCK(lk) ((lk)->threadid = (THREAD_ID_T)0, (lk)->sl = 0, (lk)->c = 0)
1975 #define DESTROY_LOCK(lk) (0)
1976 #endif /* USE_RECURSIVE_LOCKS */
1977 
1978 #elif defined(WIN32) /* Win32 critical sections */
1979 #define MLOCK_T CRITICAL_SECTION
1980 #define ACQUIRE_LOCK(lk) (EnterCriticalSection(lk), 0)
1981 #define RELEASE_LOCK(lk) LeaveCriticalSection(lk)
1982 #define TRY_LOCK(lk) TryEnterCriticalSection(lk)
1983 #define INITIAL_LOCK(lk) (!InitializeCriticalSectionAndSpinCount((lk), 0x80000000|4000))
1984 #define DESTROY_LOCK(lk) (DeleteCriticalSection(lk), 0)
1985 #define NEED_GLOBAL_LOCK_INIT
1986 
1987 static MLOCK_T malloc_global_mutex;
1988 static volatile LONG malloc_global_mutex_status;
1989 
1990 /* Use spin loop to initialize global lock */
1991 static void init_malloc_global_mutex() {
1992  for (;;) {
1993  long stat = malloc_global_mutex_status;
1994  if (stat > 0)
1995  return;
1996  /* transition to < 0 while initializing, then to > 0 */
1997  if (stat == 0 &&
1998  interlockedcompareexchange(&malloc_global_mutex_status, (LONG)-1, (LONG)0) == 0) {
1999  InitializeCriticalSection(&malloc_global_mutex);
2000  interlockedexchange(&malloc_global_mutex_status, (LONG)1);
2001  return;
2002  }
2003  SleepEx(0, FALSE);
2004  }
2005 }
2006 
2007 #else /* pthreads-based locks */
2008 #define MLOCK_T pthread_mutex_t
2009 #define ACQUIRE_LOCK(lk) pthread_mutex_lock(lk)
2010 #define RELEASE_LOCK(lk) pthread_mutex_unlock(lk)
2011 #define TRY_LOCK(lk) (!pthread_mutex_trylock(lk))
2012 #define INITIAL_LOCK(lk) pthread_init_lock(lk)
2013 #define DESTROY_LOCK(lk) pthread_mutex_destroy(lk)
2014 
2015 #if defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0 && defined(linux) && !defined(PTHREAD_MUTEX_RECURSIVE)
2016 /* Cope with old-style linux recursive lock initialization by adding */
2017 /* skipped internal declaration from pthread.h */
2018 extern int pthread_mutexattr_setkind_np __P ((pthread_mutexattr_t *__attr,
2019  int __kind));
2020 #define PTHREAD_MUTEX_RECURSIVE PTHREAD_MUTEX_RECURSIVE_NP
2021 #define pthread_mutexattr_settype(x,y) pthread_mutexattr_setkind_np(x,y)
2022 #endif /* USE_RECURSIVE_LOCKS ... */
2023 
2024 static MLOCK_T malloc_global_mutex = PTHREAD_MUTEX_INITIALIZER;
2025 
2026 static int pthread_init_lock (MLOCK_T *lk) {
2027  pthread_mutexattr_t attr;
2028  if (pthread_mutexattr_init(&attr)) return 1;
2029 #if defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0
2030  if (pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE)) return 1;
2031 #endif
2032  if (pthread_mutex_init(lk, &attr)) return 1;
2033  if (pthread_mutexattr_destroy(&attr)) return 1;
2034  return 0;
2035 }
2036 
2037 #endif /* ... lock types ... */
2038 
2039 /* Common code for all lock types */
2040 #define USE_LOCK_BIT (2U)
2041 
2042 #ifndef ACQUIRE_MALLOC_GLOBAL_LOCK
2043 #define ACQUIRE_MALLOC_GLOBAL_LOCK() ACQUIRE_LOCK(&malloc_global_mutex);
2044 #endif
2045 
2046 #ifndef RELEASE_MALLOC_GLOBAL_LOCK
2047 #define RELEASE_MALLOC_GLOBAL_LOCK() RELEASE_LOCK(&malloc_global_mutex);
2048 #endif
2049 
2050 #endif /* USE_LOCKS */
2051 
2052 /* ----------------------- Chunk representations ------------------------ */
2053 
2054 /*
2055  (The following includes lightly edited explanations by Colin Plumb.)
2056 
2057  The malloc_chunk declaration below is misleading (but accurate and
2058  necessary). It declares a "view" into memory allowing access to
2059  necessary fields at known offsets from a given base.
2060 
2061  Chunks of memory are maintained using a `boundary tag' method as
2062  originally described by Knuth. (See the paper by Paul Wilson
2063  ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such
2064  techniques.) Sizes of free chunks are stored both in the front of
2065  each chunk and at the end. This makes consolidating fragmented
2066  chunks into bigger chunks fast. The head fields also hold bits
2067  representing whether chunks are free or in use.
2068 
2069  Here are some pictures to make it clearer. They are "exploded" to
2070  show that the state of a chunk can be thought of as extending from
2071  the high 31 bits of the head field of its header through the
2072  prev_foot and PINUSE_BIT bit of the following chunk header.
2073 
2074  A chunk that's in use looks like:
2075 
2076  chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2077  | Size of previous chunk (if P = 0) |
2078  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2079  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
2080  | Size of this chunk 1| +-+
2081  mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2082  | |
2083  +- -+
2084  | |
2085  +- -+
2086  | :
2087  +- size - sizeof(size_t) available payload bytes -+
2088  : |
2089  chunk-> +- -+
2090  | |
2091  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2092  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|
2093  | Size of next chunk (may or may not be in use) | +-+
2094  mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2095 
2096  And if it's free, it looks like this:
2097 
2098  chunk-> +- -+
2099  | User payload (must be in use, or we would have merged!) |
2100  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2101  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
2102  | Size of this chunk 0| +-+
2103  mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2104  | Next pointer |
2105  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2106  | Prev pointer |
2107  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2108  | :
2109  +- size - sizeof(struct chunk) unused bytes -+
2110  : |
2111  chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2112  | Size of this chunk |
2113  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2114  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|
2115  | Size of next chunk (must be in use, or we would have merged)| +-+
2116  mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2117  | :
2118  +- User payload -+
2119  : |
2120  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2121  |0|
2122  +-+
2123  Note that since we always merge adjacent free chunks, the chunks
2124  adjacent to a free chunk must be in use.
2125 
2126  Given a pointer to a chunk (which can be derived trivially from the
2127  payload pointer) we can, in O(1) time, find out whether the adjacent
2128  chunks are free, and if so, unlink them from the lists that they
2129  are on and merge them with the current chunk.
2130 
2131  Chunks always begin on even word boundaries, so the mem portion
2132  (which is returned to the user) is also on an even word boundary, and
2133  thus at least double-word aligned.
2134 
2135  The P (PINUSE_BIT) bit, stored in the unused low-order bit of the
2136  chunk size (which is always a multiple of two words), is an in-use
2137  bit for the *previous* chunk. If that bit is *clear*, then the
2138  word before the current chunk size contains the previous chunk
2139  size, and can be used to find the front of the previous chunk.
2140  The very first chunk allocated always has this bit set, preventing
2141  access to non-existent (or non-owned) memory. If pinuse is set for
2142  any given chunk, then you CANNOT determine the size of the
2143  previous chunk, and might even get a memory addressing fault when
2144  trying to do so.
2145 
2146  The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of
2147  the chunk size redundantly records whether the current chunk is
2148  inuse (unless the chunk is mmapped). This redundancy enables usage
2149  checks within free and realloc, and reduces indirection when freeing
2150  and consolidating chunks.
2151 
2152  Each freshly allocated chunk must have both cinuse and pinuse set.
2153  That is, each allocated chunk borders either a previously allocated
2154  and still in-use chunk, or the base of its memory arena. This is
2155  ensured by making all allocations from the `lowest' part of any
2156  found chunk. Further, no free chunk physically borders another one,
2157  so each free chunk is known to be preceded and followed by either
2158  inuse chunks or the ends of memory.
2159 
2160  Note that the `foot' of the current chunk is actually represented
2161  as the prev_foot of the NEXT chunk. This makes it easier to
2162  deal with alignments etc but can be very confusing when trying
2163  to extend or adapt this code.
2164 
2165  The exceptions to all this are
2166 
2167  1. The special chunk `top' is the top-most available chunk (i.e.,
2168  the one bordering the end of available memory). It is treated
2169  specially. Top is never included in any bin, is used only if
2170  no other chunk is available, and is released back to the
2171  system if it is very large (see M_TRIM_THRESHOLD). In effect,
2172  the top chunk is treated as larger (and thus less well
2173  fitting) than any other available chunk. The top chunk
2174  doesn't update its trailing size field since there is no next
2175  contiguous chunk that would have to index off it. However,
2176  space is still allocated for it (TOP_FOOT_SIZE) to enable
2177  separation or merging when space is extended.
2178 
2179  2. Chunks allocated via mmap have both cinuse and pinuse bits
2180  cleared in their head fields. Because they are allocated
2181  one-by-one, each must carry its own prev_foot field, which is
2182  also used to hold the offset this chunk has within its mmapped
2183  region, which is needed to preserve alignment. Each mmapped
2184  chunk is trailed by the first two fields of a fake next-chunk
2185  for sake of usage checks.
2186 
2187 */
2188 
2189 struct malloc_chunk {
2190  size_t prev_foot; /* Size of previous chunk (if free). */
2191  size_t head; /* Size and inuse bits. */
2192  struct malloc_chunk* fd; /* double links -- used only if free. */
2193  struct malloc_chunk* bk;
2194 };
2195 
2196 typedef struct malloc_chunk mchunk;
2197 typedef struct malloc_chunk* mchunkptr;
2198 typedef struct malloc_chunk* sbinptr; /* The type of bins of chunks */
2199 typedef unsigned int bindex_t; /* Described below */
2200 typedef unsigned int binmap_t; /* Described below */
2201 typedef unsigned int flag_t; /* The type of various bit flag sets */
2202 
2203 /* ------------------- Chunks sizes and alignments ----------------------- */
2204 
2205 #define MCHUNK_SIZE (sizeof(mchunk))
2206 
2207 #if FOOTERS
2208 #define CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
2209 #else /* FOOTERS */
2210 #define CHUNK_OVERHEAD (SIZE_T_SIZE)
2211 #endif /* FOOTERS */
2212 
2213 /* MMapped chunks need a second word of overhead ... */
2214 #define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
2215 /* ... and additional padding for fake next-chunk at foot */
2216 #define MMAP_FOOT_PAD (FOUR_SIZE_T_SIZES)
2217 
2218 /* The smallest size we can malloc is an aligned minimal chunk */
2219 #define MIN_CHUNK_SIZE\
2220  ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
2221 
2222 /* conversion from malloc headers to user pointers, and back */
2223 #define chunk2mem(p) ((void*)((char*)(p) + TWO_SIZE_T_SIZES))
2224 #define mem2chunk(mem) ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES))
2225 /* chunk associated with aligned address A */
2226 #define align_as_chunk(A) (mchunkptr)((A) + align_offset(chunk2mem(A)))
2227 
2228 /* Bounds on request (not chunk) sizes. */
2229 #define MAX_REQUEST ((-MIN_CHUNK_SIZE) << 2)
2230 #define MIN_REQUEST (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE)
2231 
2232 /* pad request bytes into a usable size */
2233 #define pad_request(req) \
2234  (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
2235 
2236 /* pad request, checking for minimum (but not maximum) */
2237 #define request2size(req) \
2238  (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))
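/*
  Worked example, assuming a 64-bit build with FOOTERS disabled and
  MALLOC_ALIGNMENT of 16: CHUNK_OVERHEAD == 8, CHUNK_ALIGN_MASK == 15,
  MIN_CHUNK_SIZE == 32, MIN_REQUEST == 23. Then
    request2size(5)   == 32   (below MIN_REQUEST, rounded up to the minimum chunk)
    request2size(100) == 112  ((100 + 8 + 15) & ~15)
*/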
2239 
2240 
2241 /* ------------------ Operations on head and foot fields ----------------- */
2242 
2243 /*
2244  The head field of a chunk is or'ed with PINUSE_BIT when previous
2245  adjacent chunk in use, and or'ed with CINUSE_BIT if this chunk is in
2246  use, unless mmapped, in which case both bits are cleared.
2247 
2248  FLAG4_BIT is not used by this malloc, but might be useful in extensions.
2249 */
2250 
2251 #define PINUSE_BIT (SIZE_T_ONE)
2252 #define CINUSE_BIT (SIZE_T_TWO)
2253 #define FLAG4_BIT (SIZE_T_FOUR)
2254 #define INUSE_BITS (PINUSE_BIT|CINUSE_BIT)
2255 #define FLAG_BITS (PINUSE_BIT|CINUSE_BIT|FLAG4_BIT)
2256 
2257 /* Head value for fenceposts */
2258 #define FENCEPOST_HEAD (INUSE_BITS|SIZE_T_SIZE)
2259 
2260 /* extraction of fields from head words */
2261 #define cinuse(p) ((p)->head & CINUSE_BIT)
2262 #define pinuse(p) ((p)->head & PINUSE_BIT)
2263 #define flag4inuse(p) ((p)->head & FLAG4_BIT)
2264 #define is_inuse(p) (((p)->head & INUSE_BITS) != PINUSE_BIT)
2265 #define is_mmapped(p) (((p)->head & INUSE_BITS) == 0)
2266 
2267 #define chunksize(p) ((p)->head & ~(FLAG_BITS))
2268 
2269 #define clear_pinuse(p) ((p)->head &= ~PINUSE_BIT)
2270 #define set_flag4(p) ((p)->head |= FLAG4_BIT)
2271 #define clear_flag4(p) ((p)->head &= ~FLAG4_BIT)
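/*
  Worked example: a head value of 0x113 encodes chunksize(p) == 0x110
  with both CINUSE_BIT and PINUSE_BIT set, meaning this chunk and the
  previous physical chunk are in use; is_mmapped(p) is false because the
  INUSE_BITS are nonzero.
*/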
2272 
2273 /* Treat space at ptr +/- offset as a chunk */
2274 #define chunk_plus_offset(p, s) ((mchunkptr)(((char*)(p)) + (s)))
2275 #define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s)))
2276 
2277 /* Ptr to next or previous physical malloc_chunk. */
2278 #define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~FLAG_BITS)))
2279 #define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))
2280 
2281 /* extract next chunk's pinuse bit */
2282 #define next_pinuse(p) ((next_chunk(p)->head) & PINUSE_BIT)
2283 
2284 /* Get/set size at footer */
2285 #define get_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot)
2286 #define set_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))
2287 
2288 /* Set size, pinuse bit, and foot */
2289 #define set_size_and_pinuse_of_free_chunk(p, s)\
2290  ((p)->head = (s|PINUSE_BIT), set_foot(p, s))
2291 
2292 /* Set size, pinuse bit, foot, and clear next pinuse */
2293 #define set_free_with_pinuse(p, s, n)\
2294  (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))
2295 
2296 /* Get the internal overhead associated with chunk p */
2297 #define overhead_for(p)\
2298  (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)
2299 
2300 /* Return true if malloced space is not necessarily cleared */
2301 #if MMAP_CLEARS
2302 #define calloc_must_clear(p) (!is_mmapped(p))
2303 #else /* MMAP_CLEARS */
2304 #define calloc_must_clear(p) (1)
2305 #endif /* MMAP_CLEARS */
2306 
2307 /* ---------------------- Overlaid data structures ----------------------- */
2308 
2309 /*
2310  When chunks are not in use, they are treated as nodes of either
2311  lists or trees.
2312 
2313  "Small" chunks are stored in circular doubly-linked lists, and look
2314  like this:
2315 
2316  chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2317  | Size of previous chunk |
2318  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2319  `head:' | Size of chunk, in bytes |P|
2320  mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2321  | Forward pointer to next chunk in list |
2322  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2323  | Back pointer to previous chunk in list |
2324  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2325  | Unused space (may be 0 bytes long) .
2326  . .
2327  . |
2328 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2329  `foot:' | Size of chunk, in bytes |
2330  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2331 
2332  Larger chunks are kept in a form of bitwise digital trees (aka
2333  tries) keyed on chunksizes. Because malloc_tree_chunks are only for
2334  free chunks greater than 256 bytes, their size doesn't impose any
2335  constraints on user chunk sizes. Each node looks like:
2336 
2337  chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2338  | Size of previous chunk |
2339  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2340  `head:' | Size of chunk, in bytes |P|
2341  mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2342  | Forward pointer to next chunk of same size |
2343  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2344  | Back pointer to previous chunk of same size |
2345  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2346  | Pointer to left child (child[0]) |
2347  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2348  | Pointer to right child (child[1]) |
2349  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2350  | Pointer to parent |
2351  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2352  | bin index of this chunk |
2353  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2354  | Unused space .
2355  . |
2356 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2357  `foot:' | Size of chunk, in bytes |
2358  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2359 
2360  Each tree holding treenodes is a tree of unique chunk sizes. Chunks
2361  of the same size are arranged in a circularly-linked list, with only
2362  the oldest chunk (the next to be used, in our FIFO ordering)
2363  actually in the tree. (Tree members are distinguished by a non-null
2364  parent pointer.) If a chunk with the same size as an existing node
2365  is inserted, it is linked off the existing node using pointers that
2366  work in the same way as fd/bk pointers of small chunks.
2367 
2368  Each tree contains a power of 2 sized range of chunk sizes (the
2369  smallest is 0x100 <= x < 0x180), which is divided in half at each
2370  tree level, with the chunks in the smaller half of the range (0x100
2371  <= x < 0x140 for the top node) in the left subtree and the larger
2372  half (0x140 <= x < 0x180) in the right subtree. This is, of course,
2373  done by inspecting individual bits.
2374 
2375  Using these rules, each node's left subtree contains all smaller
2376  sizes than its right subtree. However, the node at the root of each
2377  subtree has no particular ordering relationship to either. (The
2378  dividing line between the subtree sizes is based on trie relation.)
2379  If we remove the last chunk of a given size from the interior of the
2380  tree, we need to replace it with a leaf node. The tree ordering
2381  rules permit a node to be replaced by any leaf below it.
2382 
2383  The smallest chunk in a tree (a common operation in a best-fit
2384  allocator) can be found by walking a path to the leftmost leaf in
2385  the tree. Unlike a usual binary tree, where we follow left child
2386  pointers until we reach a null, here we follow the right child
2387  pointer any time the left one is null, until we reach a leaf with
2388  both child pointers null. The smallest chunk in the tree will be
2389  somewhere along that path.
2390 
2391  The worst case number of steps to add, find, or remove a node is
2392  bounded by the number of bits differentiating chunks within
2393  bins. Under current bin calculations, this ranges from 6 up to 21
2394  (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case
2395  is of course much better.
2396 */
2397 
2398 struct malloc_tree_chunk {
2399  /* The first four fields must be compatible with malloc_chunk */
2400  size_t prev_foot;
2401  size_t head;
2402  struct malloc_tree_chunk* fd;
2403  struct malloc_tree_chunk* bk;
2404 
2405  struct malloc_tree_chunk* child[2];
2406  struct malloc_tree_chunk* parent;
2407  bindex_t index;
2408 };
2409 
2410 typedef struct malloc_tree_chunk tchunk;
2411 typedef struct malloc_tree_chunk* tchunkptr;
2412 typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */
2413 
2414 /* A little helper macro for trees */
2415 #define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])
2416 
2417 /* ----------------------------- Segments -------------------------------- */
2418 
2419 /*
2420  Each malloc space may include non-contiguous segments, held in a
2421  list headed by an embedded malloc_segment record representing the
2422  top-most space. Segments also include flags holding properties of
2423  the space. Large chunks that are directly allocated by mmap are not
2424  included in this list. They are instead independently created and
2425  destroyed without otherwise keeping track of them.
2426 
2427  Segment management mainly comes into play for spaces allocated by
2428  MMAP. Any call to MMAP might or might not return memory that is
2429  adjacent to an existing segment. MORECORE normally contiguously
2430  extends the current space, so this space is almost always adjacent,
2431  which is simpler and faster to deal with. (This is why MORECORE is
2432  used preferentially to MMAP when both are available -- see
2433  sys_alloc.) When allocating using MMAP, we don't use any of the
2434  hinting mechanisms (inconsistently) supported in various
2435  implementations of unix mmap, or distinguish reserving from
2436  committing memory. Instead, we just ask for space, and exploit
2437  contiguity when we get it. It is probably possible to do
2438  better than this on some systems, but no general scheme seems
2439  to be significantly better.
2440 
2441  Management entails a simpler variant of the consolidation scheme
2442  used for chunks to reduce fragmentation -- new adjacent memory is
2443  normally prepended or appended to an existing segment. However,
2444  there are limitations compared to chunk consolidation that mostly
2445  reflect the fact that segment processing is relatively infrequent
2446  (occurring only when getting memory from system) and that we
2447  don't expect to have huge numbers of segments:
2448 
2449  * Segments are not indexed, so traversal requires linear scans. (It
2450  would be possible to index these, but is not worth the extra
2451  overhead and complexity for most programs on most platforms.)
2452  * New segments are only appended to old ones when holding top-most
2453  memory; if they cannot be prepended to others, they are held in
2454  different segments.
2455 
2456  Except for the top-most segment of an mstate, each segment record
2457  is kept at the tail of its segment. Segments are added by pushing
2458  segment records onto the list headed by &mstate.seg for the
2459  containing mstate.
2460 
2461  Segment flags control allocation/merge/deallocation policies:
2462  * If EXTERN_BIT set, then we did not allocate this segment,
2463  and so should not try to deallocate or merge with others.
2464  (This currently holds only for the initial segment passed
2465  into create_mspace_with_base.)
2466  * If USE_MMAP_BIT set, the segment may be merged with
2467  other surrounding mmapped segments and trimmed/de-allocated
2468  using munmap.
2469  * If neither bit is set, then the segment was obtained using
2470  MORECORE so can be merged with surrounding MORECORE'd segments
2471  and deallocated/trimmed using MORECORE with negative arguments.
2472 */
2473 
2474 struct malloc_segment {
2475  char* base; /* base address */
2476  size_t size; /* allocated size */
2477  struct malloc_segment* next; /* ptr to next segment */
2478  flag_t sflags; /* mmap and extern flag */
2479 };
2480 
2481 #define is_mmapped_segment(S) ((S)->sflags & USE_MMAP_BIT)
2482 #define is_extern_segment(S) ((S)->sflags & EXTERN_BIT)
2483 
2484 typedef struct malloc_segment msegment;
2485 typedef struct malloc_segment* msegmentptr;
2486 
2487 /* ---------------------------- malloc_state ----------------------------- */
2488 
2489 /*
2490  A malloc_state holds all of the bookkeeping for a space.
2491  The main fields are:
2492 
2493  Top
2494  The topmost chunk of the currently active segment. Its size is
2495  cached in topsize. The actual size of topmost space is
2496  topsize+TOP_FOOT_SIZE, which includes space reserved for adding
2497  fenceposts and segment records if necessary when getting more
2498  space from the system. The size at which to autotrim top is
2499  cached from mparams in trim_check, except that it is disabled if
2500  an autotrim fails.
2501 
2502  Designated victim (dv)
2503  This is the preferred chunk for servicing small requests that
2504  don't have exact fits. It is normally the chunk split off most
2505  recently to service another small request. Its size is cached in
2506  dvsize. The link fields of this chunk are not maintained since it
2507  is not kept in a bin.
2508 
2509  SmallBins
2510  An array of bin headers for free chunks. These bins hold chunks
2511  with sizes less than MIN_LARGE_SIZE bytes. Each bin contains
2512  chunks of all the same size, spaced 8 bytes apart. To simplify
2513  use in double-linked lists, each bin header acts as a malloc_chunk
2514  pointing to the real first node, if it exists (else pointing to
2515  itself). This avoids special-casing for headers. But to avoid
2516  waste, we allocate only the fd/bk pointers of bins, and then use
2517  repositioning tricks to treat these as the fields of a chunk.
2518 
2519  TreeBins
2520  Treebins are pointers to the roots of trees holding a range of
2521  sizes. There are 2 equally spaced treebins for each power of two
2522  from TREEBIN_SHIFT to TREEBIN_SHIFT+16. The last bin holds anything
2523  larger.
2524 
2525  Bin maps
2526  There is one bit map for small bins ("smallmap") and one for
2527  treebins ("treemap). Each bin sets its bit when non-empty, and
2528  clears the bit when empty. Bit operations are then used to avoid
2529  bin-by-bin searching -- nearly all "search" is done without ever
2530  looking at bins that won't be selected. The bit maps
2531  conservatively use 32 bits per map word, even on a 64-bit system.
2532  For a good description of some of the bit-based techniques used
2533  here, see Henry S. Warren Jr's book "Hacker's Delight" (and
2534  supplement at http://hackersdelight.org/). Many of these are
2535  intended to reduce the branchiness of paths through malloc etc, as
2536  well as to reduce the number of memory locations read or written.
2537 
2538  Segments
2539  A list of segments headed by an embedded malloc_segment record
2540  representing the initial space.
2541 
2542  Address check support
2543  The least_addr field is the least address ever obtained from
2544  MORECORE or MMAP. Attempted frees and reallocs of any address less
2545  than this are trapped (unless INSECURE is defined).
2546 
2547  Magic tag
2548  A cross-check field that should always hold same value as mparams.magic.
2549 
2550  Max allowed footprint
2551  The maximum allowed bytes to allocate from system (zero means no limit)
2552 
2553  Flags
2554  Bits recording whether to use MMAP, locks, or contiguous MORECORE
2555 
2556  Statistics
2557  Each space keeps track of current and maximum system memory
2558  obtained via MORECORE or MMAP.
2559 
2560  Trim support
2561  Fields holding the amount of unused topmost memory that should trigger
2562  trimming, and a counter to force periodic scanning to release unused
2563  non-topmost segments.
2564 
2565  Locking
2566  If USE_LOCKS is defined, the "mutex" lock is acquired and released
2567  around every public call using this mspace.
2568 
2569  Extension support
2570  A void* pointer and a size_t field that can be used to help implement
2571  extensions to this malloc.
2572 */
2573 
2574 /* Bin types, widths and sizes */
2575 #define NSMALLBINS (32U)
2576 #define NTREEBINS (32U)
2577 #define SMALLBIN_SHIFT (3U)
2578 #define SMALLBIN_WIDTH (SIZE_T_ONE << SMALLBIN_SHIFT)
2579 #define TREEBIN_SHIFT (8U)
2580 #define MIN_LARGE_SIZE (SIZE_T_ONE << TREEBIN_SHIFT)
2581 #define MAX_SMALL_SIZE (MIN_LARGE_SIZE - SIZE_T_ONE)
2582 #define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD)
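/*
  With these defaults, MIN_LARGE_SIZE is 256: free chunks smaller than
  256 bytes live in the 32 smallbins (spaced 8 bytes apart), larger ones
  in the 32 treebins. On a 64-bit build without FOOTERS, MAX_SMALL_REQUEST
  works out to 255 - 15 - 8 == 232 bytes.
*/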
2583 
2584 struct malloc_state {
2585  binmap_t smallmap;
2586  binmap_t treemap;
2587  size_t dvsize;
2588  size_t topsize;
2589  char* least_addr;
2590  mchunkptr dv;
2591  mchunkptr top;
2592  size_t trim_check;
2593  size_t release_checks;
2594  size_t magic;
2595  mchunkptr smallbins[(NSMALLBINS+1)*2];
2596  tbinptr treebins[NTREEBINS];
2597  size_t footprint;
2598  size_t max_footprint;
2599  size_t footprint_limit; /* zero means no limit */
2600  flag_t mflags;
2601 #if USE_LOCKS
2602  MLOCK_T mutex; /* locate lock among fields that rarely change */
2603 #endif /* USE_LOCKS */
2604  msegment seg;
2605  void* extp; /* Unused but available for extensions */
2606  size_t exts;
2607 };
2608 
2609 typedef struct malloc_state* mstate;
2610 
2611 /* ------------- Global malloc_state and malloc_params ------------------- */
2612 
2613 /*
2614  malloc_params holds global properties, including those that can be
2615  dynamically set using mallopt. There is a single instance, mparams,
2616  initialized in init_mparams. Note that the non-zeroness of "magic"
2617  also serves as an initialization flag.
2618 */
2619 
2620 struct malloc_params {
2621  size_t magic;
2622  size_t page_size;
2623  size_t granularity;
2624  size_t mmap_threshold;
2625  size_t trim_threshold;
2626  flag_t default_mflags;
2627 };
2628 
2629 static struct malloc_params mparams;
2630 
2631 /* Ensure mparams initialized */
2632 #define ensure_initialization() (void)(mparams.magic != 0 || init_mparams())
2633 
2634 #if !ONLY_MSPACES
2635 
2636 /* The global malloc_state used for all non-"mspace" calls */
2637 static struct malloc_state _gm_;
2638 #define gm (&_gm_)
2639 #define is_global(M) ((M) == &_gm_)
2640 
2641 #endif /* !ONLY_MSPACES */
2642 
2643 #define is_initialized(M) ((M)->top != 0)
2644 
2645 /* -------------------------- system alloc setup ------------------------- */
2646 
2647 /* Operations on mflags */
2648 
2649 #define use_lock(M) ((M)->mflags & USE_LOCK_BIT)
2650 #define enable_lock(M) ((M)->mflags |= USE_LOCK_BIT)
2651 #if USE_LOCKS
2652 #define disable_lock(M) ((M)->mflags &= ~USE_LOCK_BIT)
2653 #else
2654 #define disable_lock(M)
2655 #endif
2656 
2657 #define use_mmap(M) ((M)->mflags & USE_MMAP_BIT)
2658 #define enable_mmap(M) ((M)->mflags |= USE_MMAP_BIT)
2659 #if HAVE_MMAP
2660 #define disable_mmap(M) ((M)->mflags &= ~USE_MMAP_BIT)
2661 #else
2662 #define disable_mmap(M)
2663 #endif
2664 
2665 #define use_noncontiguous(M) ((M)->mflags & USE_NONCONTIGUOUS_BIT)
2666 #define disable_contiguous(M) ((M)->mflags |= USE_NONCONTIGUOUS_BIT)
2667 
2668 #define set_lock(M,L)\
2669  ((M)->mflags = (L)?\
2670  ((M)->mflags | USE_LOCK_BIT) :\
2671  ((M)->mflags & ~USE_LOCK_BIT))
2672 
2673 /* page-align a size */
2674 #define page_align(S)\
2675  (((S) + (mparams.page_size - SIZE_T_ONE)) & ~(mparams.page_size - SIZE_T_ONE))
2676 
2677 /* granularity-align a size */
2678 #define granularity_align(S)\
2679  (((S) + (mparams.granularity - SIZE_T_ONE))\
2680  & ~(mparams.granularity - SIZE_T_ONE))
2681 
2682 
2683 /* For mmap, use granularity alignment on windows, else page-align */
2684 #ifdef WIN32
2685 #define mmap_align(S) granularity_align(S)
2686 #else
2687 #define mmap_align(S) page_align(S)
2688 #endif
2689 
2690 /* For sys_alloc, enough padding to ensure the request can be malloced on success */
2691 #define SYS_ALLOC_PADDING (TOP_FOOT_SIZE + MALLOC_ALIGNMENT)
2692 
2693 #define is_page_aligned(S)\
2694  (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0)
2695 #define is_granularity_aligned(S)\
2696  (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0)
2697 
2698 /* True if segment S holds address A */
2699 #define segment_holds(S, A)\
2700  ((char*)(A) >= S->base && (char*)(A) < S->base + S->size)
2701 
2702 /* Return segment holding given address */
2703 static msegmentptr segment_holding(mstate m, char* addr) {
2704  msegmentptr sp = &m->seg;
2705  for (;;) {
2706  if (addr >= sp->base && addr < sp->base + sp->size)
2707  return sp;
2708  if ((sp = sp->next) == 0)
2709  return 0;
2710  }
2711 }
2712 
2713 /* Return true if segment contains a segment link */
2714 static int has_segment_link(mstate m, msegmentptr ss) {
2715  msegmentptr sp = &m->seg;
2716  for (;;) {
2717  if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size)
2718  return 1;
2719  if ((sp = sp->next) == 0)
2720  return 0;
2721  }
2722 }
2723 
2724 #ifndef MORECORE_CANNOT_TRIM
2725 #define should_trim(M,s) ((s) > (M)->trim_check)
2726 #else /* MORECORE_CANNOT_TRIM */
2727 #define should_trim(M,s) (0)
2728 #endif /* MORECORE_CANNOT_TRIM */
2729 
2730 /*
2731  TOP_FOOT_SIZE is padding at the end of a segment, including space
2732  that may be needed to place segment records and fenceposts when new
2733  noncontiguous segments are added.
2734 */
2735 #define TOP_FOOT_SIZE\
2736  (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE)
2737 
2738 
2739 /* ------------------------------- Hooks -------------------------------- */
2740 
2741 /*
2742  PREACTION should be defined to return 0 on success, and nonzero on
2743  failure. If you are not using locking, you can redefine these to do
2744  anything you like.
2745 */
2746 
2747 #if USE_LOCKS
2748 #define PREACTION(M) ((use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0)
2749 #define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); }
2750 #else /* USE_LOCKS */
2751 
2752 #ifndef PREACTION
2753 #define PREACTION(M) (0)
2754 #endif /* PREACTION */
2755 
2756 #ifndef POSTACTION
2757 #define POSTACTION(M)
2758 #endif /* POSTACTION */
2759 
2760 #endif /* USE_LOCKS */
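/*
  Sketch (not part of the allocator): a build compiled with USE_LOCKS
  defined to 0 can still interpose on every public entry by predefining the
  hooks before including this file, keeping the 0-on-success contract, e.g.

    #define PREACTION(M)  (my_trace_enter(M), 0)
    #define POSTACTION(M) my_trace_exit(M)

  where my_trace_enter/my_trace_exit are hypothetical user-supplied
  functions.
*/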
2761 
2762 /*
2763  CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses.
2764  USAGE_ERROR_ACTION is triggered on detected bad frees and
2765  reallocs. The argument p is an address that might have triggered the
2766  fault. It is ignored by the two predefined actions, but might be
2767  useful in custom actions that try to help diagnose errors.
2768 */
2769 
2770 #if PROCEED_ON_ERROR
2771 
2772 /* A count of the number of corruption errors causing resets */
2773 int malloc_corruption_error_count;
2774 
2775 /* default corruption action */
2776 static void reset_on_error(mstate m);
2777 
2778 #define CORRUPTION_ERROR_ACTION(m) reset_on_error(m)
2779 #define USAGE_ERROR_ACTION(m, p)
2780 
2781 #else /* PROCEED_ON_ERROR */
2782 
2783 #ifndef CORRUPTION_ERROR_ACTION
2784 #define CORRUPTION_ERROR_ACTION(m) ABORT
2785 #endif /* CORRUPTION_ERROR_ACTION */
2786 
2787 #ifndef USAGE_ERROR_ACTION
2788 #define USAGE_ERROR_ACTION(m,p) ABORT
2789 #endif /* USAGE_ERROR_ACTION */
2790 
2791 #endif /* PROCEED_ON_ERROR */
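/*
  Sketch (illustrative only): a diagnostic build could report before
  aborting by predefining, for example,

    #define USAGE_ERROR_ACTION(m,p) \
      (fprintf(stderr, "dlmalloc: bad free/realloc of %p\n", (void*)(p)), ABORT)

  The fprintf call is only an example of a user-supplied reporting hook;
  the stock action is simply ABORT.
*/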
2792 
2793 
2794 /* -------------------------- Debugging setup ---------------------------- */
2795 
2796 #if ! DEBUG
2797 
2798 #define check_free_chunk(M,P)
2799 #define check_inuse_chunk(M,P)
2800 #define check_malloced_chunk(M,P,N)
2801 #define check_mmapped_chunk(M,P)
2802 #define check_malloc_state(M)
2803 #define check_top_chunk(M,P)
2804 
2805 #else /* DEBUG */
2806 #define check_free_chunk(M,P) do_check_free_chunk(M,P)
2807 #define check_inuse_chunk(M,P) do_check_inuse_chunk(M,P)
2808 #define check_top_chunk(M,P) do_check_top_chunk(M,P)
2809 #define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N)
2810 #define check_mmapped_chunk(M,P) do_check_mmapped_chunk(M,P)
2811 #define check_malloc_state(M) do_check_malloc_state(M)
2812 
2813 static void do_check_any_chunk(mstate m, mchunkptr p);
2814 static void do_check_top_chunk(mstate m, mchunkptr p);
2815 static void do_check_mmapped_chunk(mstate m, mchunkptr p);
2816 static void do_check_inuse_chunk(mstate m, mchunkptr p);
2817 static void do_check_free_chunk(mstate m, mchunkptr p);
2818 static void do_check_malloced_chunk(mstate m, void* mem, size_t s);
2819 static void do_check_tree(mstate m, tchunkptr t);
2820 static void do_check_treebin(mstate m, bindex_t i);
2821 static void do_check_smallbin(mstate m, bindex_t i);
2822 static void do_check_malloc_state(mstate m);
2823 static int bin_find(mstate m, mchunkptr x);
2824 static size_t traverse_and_check(mstate m);
2825 #endif /* DEBUG */
2826 
2827 /* ---------------------------- Indexing Bins ---------------------------- */
2828 
2829 #define is_small(s) (((s) >> SMALLBIN_SHIFT) < NSMALLBINS)
2830 #define small_index(s) (bindex_t)((s) >> SMALLBIN_SHIFT)
2831 #define small_index2size(i) ((i) << SMALLBIN_SHIFT)
2832 #define MIN_SMALL_INDEX (small_index(MIN_CHUNK_SIZE))
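/*
  Worked example (assuming the usual SMALLBIN_SHIFT of 3 and NSMALLBINS of
  32, neither of which is shown in this excerpt): is_small() then accepts
  chunk sizes below 32<<3 = 256, matching MIN_LARGE_SIZE above, and
  small_index(48) == 6 with small_index2size(6) == 48, i.e. smallbin i
  holds chunks of exactly i<<3 bytes.
*/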
2833 
2834 /* addressing by index. See above about smallbin repositioning */
2835 #define smallbin_at(M, i) ((sbinptr)((char*)&((M)->smallbins[(i)<<1])))
2836 #define treebin_at(M,i) (&((M)->treebins[i]))
2837 
2838 /* assign tree index for size S to variable I. Use x86 asm if possible */
2839 #if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
2840 #define compute_tree_index(S, I)\
2841 {\
2842  unsigned int X = S >> TREEBIN_SHIFT;\
2843  if (X == 0)\
2844  I = 0;\
2845  else if (X > 0xFFFF)\
2846  I = NTREEBINS-1;\
2847  else {\
2848  unsigned int K = (unsigned) sizeof(X)*__CHAR_BIT__ - 1 - (unsigned) __builtin_clz(X); \
2849  I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2850  }\
2851 }
2852 
2853 #elif defined (__INTEL_COMPILER)
2854 #define compute_tree_index(S, I)\
2855 {\
2856  size_t X = S >> TREEBIN_SHIFT;\
2857  if (X == 0)\
2858  I = 0;\
2859  else if (X > 0xFFFF)\
2860  I = NTREEBINS-1;\
2861  else {\
2862  unsigned int K = _bit_scan_reverse (X); \
2863  I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2864  }\
2865 }
2866 
2867 #elif defined(_MSC_VER) && _MSC_VER>=1300
2868 #define compute_tree_index(S, I)\
2869 {\
2870  size_t X = S >> TREEBIN_SHIFT;\
2871  if (X == 0)\
2872  I = 0;\
2873  else if (X > 0xFFFF)\
2874  I = NTREEBINS-1;\
2875  else {\
2876  unsigned int K;\
2877  _BitScanReverse((DWORD *) &K, (DWORD) X);\
2878  I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2879  }\
2880 }
2881 
2882 #else /* GNUC */
2883 #define compute_tree_index(S, I)\
2884 {\
2885  size_t X = S >> TREEBIN_SHIFT;\
2886  if (X == 0)\
2887  I = 0;\
2888  else if (X > 0xFFFF)\
2889  I = NTREEBINS-1;\
2890  else {\
2891  unsigned int Y = (unsigned int)X;\
2892  unsigned int N = ((Y - 0x100) >> 16) & 8;\
2893  unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\
2894  N += K;\
2895  N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\
2896  K = 14 - N + ((Y <<= K) >> 15);\
2897  I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\
2898  }\
2899 }
2900 #endif /* GNUC */
2901 
2902 /* Bit representing maximum resolved size in a treebin at i */
2903 #define bit_for_tree_index(i) \
2904  (i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2)
2905 
2906 /* Shift placing maximum resolved bit in a treebin at i as sign bit */
2907 #define leftshift_for_tree_index(i) \
2908  ((i == NTREEBINS-1)? 0 : \
2909  ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2)))
2910 
2911 /* The size of the smallest chunk held in bin with index i */
2912 #define minsize_for_tree_index(i) \
2913  ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) | \
2914  (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1)))
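/*
  Worked example of the index math above: for a chunk of S = 768 bytes,
  X = 768 >> 8 = 3, the highest set bit of X is K = 1, and
  I = (K << 1) + ((768 >> (K + 7)) & 1) = 2 + 1 = 3. Cross-check:
  minsize_for_tree_index(3) = (1 << 9) | (1 << 8) = 768 and
  minsize_for_tree_index(4) = 1024, so treebin 3 covers sizes
  [768, 1024) and 768 indeed belongs there.
*/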
2915 
2916 
2917 /* ------------------------ Operations on bin maps ----------------------- */
2918 
2919 /* bit corresponding to given index */
2920 #define idx2bit(i) ((binmap_t)(1) << (i))
2921 
2922 /* Mark/Clear bits with given index */
2923 #define mark_smallmap(M,i) ((M)->smallmap |= idx2bit(i))
2924 #define clear_smallmap(M,i) ((M)->smallmap &= ~idx2bit(i))
2925 #define smallmap_is_marked(M,i) ((M)->smallmap & idx2bit(i))
2926 
2927 #define mark_treemap(M,i) ((M)->treemap |= idx2bit(i))
2928 #define clear_treemap(M,i) ((M)->treemap &= ~idx2bit(i))
2929 #define treemap_is_marked(M,i) ((M)->treemap & idx2bit(i))
2930 
2931 /* isolate the least set bit of a bitmap */
2932 #define least_bit(x) ((x) & -(x))
2933 
2934 /* mask with all bits to left of least bit of x on */
2935 #define left_bits(x) ((x<<1) | -(x<<1))
2936 
2937 /* mask with all bits to left of or equal to least bit of x on */
2938 #define same_or_left_bits(x) ((x) | -(x))
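/*
  Worked example: these maps drive "find the next non-empty bin above
  index i". With treemap == 0x24 (bins 2 and 5 populated) and a search
  starting above bin 3: left_bits(idx2bit(3)) == ~0x0F, masking treemap
  gives 0x20, and least_bit(0x20) == 0x20, which compute_bit2idx below
  turns back into index 5.
*/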
2939 
2940 /* index corresponding to given bit. Use x86 asm if possible */
2941 
2942 #if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
2943 #define compute_bit2idx(X, I)\
2944 {\
2945  unsigned int J;\
2946  J = __builtin_ctz(X); \
2947  I = (bindex_t)J;\
2948 }
2949 
2950 #elif defined (__INTEL_COMPILER)
2951 #define compute_bit2idx(X, I)\
2952 {\
2953  unsigned int J;\
2954  J = _bit_scan_forward (X); \
2955  I = (bindex_t)J;\
2956 }
2957 
2958 #elif defined(_MSC_VER) && _MSC_VER>=1300
2959 #define compute_bit2idx(X, I)\
2960 {\
2961  unsigned int J;\
2962  _BitScanForward((DWORD *) &J, X);\
2963  I = (bindex_t)J;\
2964 }
2965 
2966 #elif USE_BUILTIN_FFS
2967 #define compute_bit2idx(X, I) I = ffs(X)-1
2968 
2969 #else
2970 #define compute_bit2idx(X, I)\
2971 {\
2972  unsigned int Y = X - 1;\
2973  unsigned int K = Y >> (16-4) & 16;\
2974  unsigned int N = K; Y >>= K;\
2975  N += K = Y >> (8-3) & 8; Y >>= K;\
2976  N += K = Y >> (4-2) & 4; Y >>= K;\
2977  N += K = Y >> (2-1) & 2; Y >>= K;\
2978  N += K = Y >> (1-0) & 1; Y >>= K;\
2979  I = (bindex_t)(N + Y);\
2980 }
2981 #endif /* GNUC */
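/*
  Worked trace of the portable fallback, which expects X to be a
  single-bit mask such as the result of least_bit: for X = 0x20, Y starts
  at 0x1F; the 16- and 8-bit steps contribute 0, the 4-bit step adds 4 and
  shifts Y down to 1, the remaining steps add 0, and I = N + Y = 4 + 1 = 5,
  the index of the set bit.
*/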
2982 
2983 
2984 /* ----------------------- Runtime Check Support ------------------------- */
2985 
2986 /*
2987  For security, the main invariant is that malloc/free/etc never
2988  writes to a static address other than malloc_state, unless static
2989  malloc_state itself has been corrupted, which cannot occur via
2990  malloc (because of these checks). In essence this means that we
2991  believe all pointers, sizes, maps etc held in malloc_state, but
2992  check all of those linked or offset from other embedded data
2993  structures. These checks are interspersed with main code in a way
2994  that tends to minimize their run-time cost.
2995 
2996  When FOOTERS is defined, in addition to range checking, we also
2997  verify footer fields of inuse chunks, which can be used to guarantee
2998  that the mstate controlling malloc/free is intact. This is a
2999  streamlined version of the approach described by William Robertson
3000  et al in "Run-time Detection of Heap-based Overflows" LISA'03
3001  http://www.usenix.org/events/lisa03/tech/robertson.html The footer
3002  of an inuse chunk holds the xor of its mstate and a random seed,
3003  that is checked upon calls to free() and realloc(). This is
3004  (probabilistically) unguessable from outside the program, but can be
3005  computed by any code successfully malloc'ing any chunk, so does not
3006  itself provide protection against code that has already broken
3007  security through some other means. Unlike Robertson et al, we
3008  always dynamically check addresses of all offset chunks (previous,
3009  next, etc). This turns out to be cheaper than relying on hashes.
3010 */
3011 
3012 #if !INSECURE
3013 /* Check if address a is at least as high as any from MORECORE or MMAP */
3014 #define ok_address(M, a) ((char*)(a) >= (M)->least_addr)
3015 /* Check if address of next chunk n is higher than base chunk p */
3016 #define ok_next(p, n) ((char*)(p) < (char*)(n))
3017 /* Check if p has inuse status */
3018 #define ok_inuse(p) is_inuse(p)
3019 /* Check if p has its pinuse bit on */
3020 #define ok_pinuse(p) pinuse(p)
3021 
3022 #else /* !INSECURE */
3023 #define ok_address(M, a) (1)
3024 #define ok_next(b, n) (1)
3025 #define ok_inuse(p) (1)
3026 #define ok_pinuse(p) (1)
3027 #endif /* !INSECURE */
3028 
3029 #if (FOOTERS && !INSECURE)
3030 /* Check if (alleged) mstate m has expected magic field */
3031 #define ok_magic(M) ((M)->magic == mparams.magic)
3032 #else /* (FOOTERS && !INSECURE) */
3033 #define ok_magic(M) (1)
3034 #endif /* (FOOTERS && !INSECURE) */
3035 
3036 /* In gcc, use __builtin_expect to minimize impact of checks */
3037 #if !INSECURE
3038 #if defined(__GNUC__) && __GNUC__ >= 3
3039 #define RTCHECK(e) __builtin_expect(e, 1)
3040 #else /* GNUC */
3041 #define RTCHECK(e) (e)
3042 #endif /* GNUC */
3043 #else /* !INSECURE */
3044 #define RTCHECK(e) (1)
3045 #endif /* !INSECURE */
3046 
3047 /* macros to set up inuse chunks with or without footers */
3048 
3049 #if !FOOTERS
3050 
3051 #define mark_inuse_foot(M,p,s)
3052 
3053 /* Macros for setting head/foot of non-mmapped chunks */
3054 
3055 /* Set cinuse bit and pinuse bit of next chunk */
3056 #define set_inuse(M,p,s)\
3057  ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
3058  ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
3059 
3060 /* Set cinuse and pinuse of this chunk and pinuse of next chunk */
3061 #define set_inuse_and_pinuse(M,p,s)\
3062  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
3063  ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
3064 
3065 /* Set size, cinuse and pinuse bit of this chunk */
3066 #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
3067  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT))
3068 
3069 #else /* FOOTERS */
3070 
3071 /* Set foot of inuse chunk to be xor of mstate and seed */
3072 #define mark_inuse_foot(M,p,s)\
3073  (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic))
3074 
3075 #define get_mstate_for(p)\
3076  ((mstate)(((mchunkptr)((char*)(p) +\
3077  (chunksize(p))))->prev_foot ^ mparams.magic))
3078 
3079 #define set_inuse(M,p,s)\
3080  ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
3081  (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \
3082  mark_inuse_foot(M,p,s))
3083 
3084 #define set_inuse_and_pinuse(M,p,s)\
3085  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
3086  (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\
3087  mark_inuse_foot(M,p,s))
3088 
3089 #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
3090  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
3091  mark_inuse_foot(M, p, s))
3092 
3093 #endif /* !FOOTERS */
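/*
  Note on the FOOTERS round trip: mark_inuse_foot() stores
  ((size_t)m ^ mparams.magic) in the prev_foot field of the following
  chunk, so get_mstate_for() applied to an intact inuse chunk of m
  recovers m by xoring with mparams.magic again. The free()/realloc()
  paths later in this file rely on this round trip, together with
  ok_magic(), to locate and validate the owning mstate.
*/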
3094 
3095 /* ---------------------------- setting mparams -------------------------- */
3096 
3097 #if LOCK_AT_FORK
3098 static void pre_fork(void) { ACQUIRE_LOCK(&(gm)->mutex); }
3099 static void post_fork_parent(void) { RELEASE_LOCK(&(gm)->mutex); }
3100 static void post_fork_child(void) { INITIAL_LOCK(&(gm)->mutex); }
3101 #endif /* LOCK_AT_FORK */
3102 
3103 /* Initialize mparams */
3104 static int init_mparams(void) {
3105 #ifdef NEED_GLOBAL_LOCK_INIT
3106  if (malloc_global_mutex_status <= 0)
3107  init_malloc_global_mutex();
3108 #endif
3109 
3110  ACQUIRE_MALLOC_GLOBAL_LOCK();
3111  if (mparams.magic == 0) {
3112  size_t magic;
3113  size_t psize;
3114  size_t gsize;
3115 
3116 #ifndef WIN32
3117  psize = malloc_getpagesize;
3118  gsize = ((DEFAULT_GRANULARITY != 0)? DEFAULT_GRANULARITY : psize);
3119 #else /* WIN32 */
3120  {
3121  SYSTEM_INFO system_info;
3122  GetSystemInfo(&system_info);
3123  psize = system_info.dwPageSize;
3124  gsize = ((DEFAULT_GRANULARITY != 0)?
3125  DEFAULT_GRANULARITY : system_info.dwAllocationGranularity);
3126  }
3127 #endif /* WIN32 */
3128 
3129  /* Sanity-check configuration:
3130  size_t must be unsigned and as wide as pointer type.
3131  ints must be at least 4 bytes.
3132  alignment must be at least 8.
3133  Alignment, min chunk size, and page size must all be powers of 2.
3134  */
3135  if ((sizeof(size_t) != sizeof(char*)) ||
3136  (MAX_SIZE_T < MIN_CHUNK_SIZE) ||
3137  (sizeof(int) < 4) ||
3138  (MALLOC_ALIGNMENT < (size_t)8U) ||
3139  ((MALLOC_ALIGNMENT & (MALLOC_ALIGNMENT-SIZE_T_ONE)) != 0) ||
3140  ((MCHUNK_SIZE & (MCHUNK_SIZE-SIZE_T_ONE)) != 0) ||
3141  ((gsize & (gsize-SIZE_T_ONE)) != 0) ||
3142  ((psize & (psize-SIZE_T_ONE)) != 0))
3143  ABORT;
3144  mparams.granularity = gsize;
3145  mparams.page_size = psize;
3146  mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;
3147  mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;
3148 #if MORECORE_CONTIGUOUS
3149  mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT;
3150 #else /* MORECORE_CONTIGUOUS */
3151  mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;
3152 #endif /* MORECORE_CONTIGUOUS */
3153 
3154 #if !ONLY_MSPACES
3155  /* Set up lock for main malloc area */
3156  gm->mflags = mparams.default_mflags;
3157  (void)INITIAL_LOCK(&gm->mutex);
3158 #endif
3159 #if LOCK_AT_FORK
3160  pthread_atfork(&pre_fork, &post_fork_parent, &post_fork_child);
3161 #endif
3162 
3163  {
3164 #if USE_DEV_RANDOM
3165  int fd;
3166  unsigned char buf[sizeof(size_t)];
3167  /* Try to use /dev/urandom, else fall back on using time */
3168  if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 &&
3169  read(fd, buf, sizeof(buf)) == sizeof(buf)) {
3170  magic = *((size_t *) buf);
3171  close(fd);
3172  }
3173  else
3174 #endif /* USE_DEV_RANDOM */
3175 #ifdef WIN32
3176  magic = (size_t)(GetTickCount() ^ (size_t)0x55555555U);
3177 #elif defined(LACKS_TIME_H)
3178  magic = (size_t)&magic ^ (size_t)0x55555555U;
3179 #else
3180  magic = (size_t)(time(0) ^ (size_t)0x55555555U);
3181 #endif
3182  magic |= (size_t)8U; /* ensure nonzero */
3183  magic &= ~(size_t)7U; /* improve chances of fault for bad values */
3184  /* Until memory modes commonly available, use volatile-write */
3185  (*(volatile size_t *)(&(mparams.magic))) = magic;
3186  }
3187  }
3188 
3189  RELEASE_MALLOC_GLOBAL_LOCK();
3190  return 1;
3191 }
3192 
3193 /* support for mallopt */
3194 static int change_mparam(int param_number, int value) {
3195  size_t val;
3196  ensure_initialization();
3197  val = (value == -1)? MAX_SIZE_T : (size_t)value;
3198  switch(param_number) {
3199  case M_TRIM_THRESHOLD:
3200  mparams.trim_threshold = val;
3201  return 1;
3202  case M_GRANULARITY:
3203  if (val >= mparams.page_size && ((val & (val-1)) == 0)) {
3204  mparams.granularity = val;
3205  return 1;
3206  }
3207  else
3208  return 0;
3209  case M_MMAP_THRESHOLD:
3210  mparams.mmap_threshold = val;
3211  return 1;
3212  default:
3213  return 0;
3214  }
3215 }
3216 
3217 #if DEBUG
3218 /* ------------------------- Debugging Support --------------------------- */
3219 
3220 /* Check properties of any chunk, whether free, inuse, mmapped etc */
3221 static void do_check_any_chunk(mstate m, mchunkptr p) {
3222  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
3223  assert(ok_address(m, p));
3224 }
3225 
3226 /* Check properties of top chunk */
3227 static void do_check_top_chunk(mstate m, mchunkptr p) {
3228  msegmentptr sp = segment_holding(m, (char*)p);
3229  size_t sz = p->head & ~INUSE_BITS; /* third-lowest bit can be set! */
3230  assert(sp != 0);
3231  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
3232  assert(ok_address(m, p));
3233  assert(sz == m->topsize);
3234  assert(sz > 0);
3235  assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE);
3236  assert(pinuse(p));
3237  assert(!pinuse(chunk_plus_offset(p, sz)));
3238 }
3239 
3240 /* Check properties of (inuse) mmapped chunks */
3241 static void do_check_mmapped_chunk(mstate m, mchunkptr p) {
3242  size_t sz = chunksize(p);
3243  size_t len = (sz + (p->prev_foot) + MMAP_FOOT_PAD);
3244  assert(is_mmapped(p));
3245  assert(use_mmap(m));
3246  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
3247  assert(ok_address(m, p));
3248  assert(!is_small(sz));
3249  assert((len & (mparams.page_size-SIZE_T_ONE)) == 0);
3250  assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD);
3251  assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head == 0);
3252 }
3253 
3254 /* Check properties of inuse chunks */
3255 static void do_check_inuse_chunk(mstate m, mchunkptr p) {
3256  do_check_any_chunk(m, p);
3257  assert(is_inuse(p));
3258  assert(next_pinuse(p));
3259  /* If not pinuse and not mmapped, previous chunk has OK offset */
3260  assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p);
3261  if (is_mmapped(p))
3262  do_check_mmapped_chunk(m, p);
3263 }
3264 
3265 /* Check properties of free chunks */
3266 static void do_check_free_chunk(mstate m, mchunkptr p) {
3267  size_t sz = chunksize(p);
3268  mchunkptr next = chunk_plus_offset(p, sz);
3269  do_check_any_chunk(m, p);
3270  assert(!is_inuse(p));
3271  assert(!next_pinuse(p));
3272  assert (!is_mmapped(p));
3273  if (p != m->dv && p != m->top) {
3274  if (sz >= MIN_CHUNK_SIZE) {
3275  assert((sz & CHUNK_ALIGN_MASK) == 0);
3276  assert(is_aligned(chunk2mem(p)));
3277  assert(next->prev_foot == sz);
3278  assert(pinuse(p));
3279  assert (next == m->top || is_inuse(next));
3280  assert(p->fd->bk == p);
3281  assert(p->bk->fd == p);
3282  }
3283  else /* markers are always of size SIZE_T_SIZE */
3284  assert(sz == SIZE_T_SIZE);
3285  }
3286 }
3287 
3288 /* Check properties of malloced chunks at the point they are malloced */
3289 static void do_check_malloced_chunk(mstate m, void* mem, size_t s) {
3290  if (mem != 0) {
3291  mchunkptr p = mem2chunk(mem);
3292  size_t sz = p->head & ~INUSE_BITS;
3293  do_check_inuse_chunk(m, p);
3294  assert((sz & CHUNK_ALIGN_MASK) == 0);
3295  assert(sz >= MIN_CHUNK_SIZE);
3296  assert(sz >= s);
3297  /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */
3298  assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE));
3299  }
3300 }
3301 
3302 /* Check a tree and its subtrees. */
3303 static void do_check_tree(mstate m, tchunkptr t) {
3304  tchunkptr head = 0;
3305  tchunkptr u = t;
3306  bindex_t tindex = t->index;
3307  size_t tsize = chunksize(t);
3308  bindex_t idx;
3309  compute_tree_index(tsize, idx);
3310  assert(tindex == idx);
3311  assert(tsize >= MIN_LARGE_SIZE);
3312  assert(tsize >= minsize_for_tree_index(idx));
3313  assert((idx == NTREEBINS-1) || (tsize < minsize_for_tree_index((idx+1))));
3314 
3315  do { /* traverse through chain of same-sized nodes */
3316  do_check_any_chunk(m, ((mchunkptr)u));
3317  assert(u->index == tindex);
3318  assert(chunksize(u) == tsize);
3319  assert(!is_inuse(u));
3320  assert(!next_pinuse(u));
3321  assert(u->fd->bk == u);
3322  assert(u->bk->fd == u);
3323  if (u->parent == 0) {
3324  assert(u->child[0] == 0);
3325  assert(u->child[1] == 0);
3326  }
3327  else {
3328  assert(head == 0); /* only one node on chain has parent */
3329  head = u;
3330  assert(u->parent != u);
3331  assert (u->parent->child[0] == u ||
3332  u->parent->child[1] == u ||
3333  *((tbinptr*)(u->parent)) == u);
3334  if (u->child[0] != 0) {
3335  assert(u->child[0]->parent == u);
3336  assert(u->child[0] != u);
3337  do_check_tree(m, u->child[0]);
3338  }
3339  if (u->child[1] != 0) {
3340  assert(u->child[1]->parent == u);
3341  assert(u->child[1] != u);
3342  do_check_tree(m, u->child[1]);
3343  }
3344  if (u->child[0] != 0 && u->child[1] != 0) {
3345  assert(chunksize(u->child[0]) < chunksize(u->child[1]));
3346  }
3347  }
3348  u = u->fd;
3349  } while (u != t);
3350  assert(head != 0);
3351 }
3352 
3353 /* Check all the chunks in a treebin. */
3354 static void do_check_treebin(mstate m, bindex_t i) {
3355  tbinptr* tb = treebin_at(m, i);
3356  tchunkptr t = *tb;
3357  int empty = (m->treemap & (1U << i)) == 0;
3358  if (t == 0)
3359  assert(empty);
3360  if (!empty)
3361  do_check_tree(m, t);
3362 }
3363 
3364 /* Check all the chunks in a smallbin. */
3365 static void do_check_smallbin(mstate m, bindex_t i) {
3366  sbinptr b = smallbin_at(m, i);
3367  mchunkptr p = b->bk;
3368  unsigned int empty = (m->smallmap & (1U << i)) == 0;
3369  if (p == b)
3370  assert(empty);
3371  if (!empty) {
3372  for (; p != b; p = p->bk) {
3373  size_t size = chunksize(p);
3374  mchunkptr q;
3375  /* each chunk claims to be free */
3376  do_check_free_chunk(m, p);
3377  /* chunk belongs in bin */
3378  assert(small_index(size) == i);
3379  assert(p->bk == b || chunksize(p->bk) == chunksize(p));
3380  /* chunk is followed by an inuse chunk */
3381  q = next_chunk(p);
3382  if (q->head != FENCEPOST_HEAD)
3383  do_check_inuse_chunk(m, q);
3384  }
3385  }
3386 }
3387 
3388 /* Find x in a bin. Used in other check functions. */
3389 static int bin_find(mstate m, mchunkptr x) {
3390  size_t size = chunksize(x);
3391  if (is_small(size)) {
3392  bindex_t sidx = small_index(size);
3393  sbinptr b = smallbin_at(m, sidx);
3394  if (smallmap_is_marked(m, sidx)) {
3395  mchunkptr p = b;
3396  do {
3397  if (p == x)
3398  return 1;
3399  } while ((p = p->fd) != b);
3400  }
3401  }
3402  else {
3403  bindex_t tidx;
3404  compute_tree_index(size, tidx);
3405  if (treemap_is_marked(m, tidx)) {
3406  tchunkptr t = *treebin_at(m, tidx);
3407  size_t sizebits = size << leftshift_for_tree_index(tidx);
3408  while (t != 0 && chunksize(t) != size) {
3409  t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
3410  sizebits <<= 1;
3411  }
3412  if (t != 0) {
3413  tchunkptr u = t;
3414  do {
3415  if (u == (tchunkptr)x)
3416  return 1;
3417  } while ((u = u->fd) != t);
3418  }
3419  }
3420  }
3421  return 0;
3422 }
3423 
3424 /* Traverse each chunk and check it; return total */
3425 static size_t traverse_and_check(mstate m) {
3426  size_t sum = 0;
3427  if (is_initialized(m)) {
3428  msegmentptr s = &m->seg;
3429  sum += m->topsize + TOP_FOOT_SIZE;
3430  while (s != 0) {
3431  mchunkptr q = align_as_chunk(s->base);
3432  mchunkptr lastq = 0;
3433  assert(pinuse(q));
3434  while (segment_holds(s, q) &&
3435  q != m->top && q->head != FENCEPOST_HEAD) {
3436  sum += chunksize(q);
3437  if (is_inuse(q)) {
3438  assert(!bin_find(m, q));
3439  do_check_inuse_chunk(m, q);
3440  }
3441  else {
3442  assert(q == m->dv || bin_find(m, q));
3443  assert(lastq == 0 || is_inuse(lastq)); /* Not 2 consecutive free */
3444  do_check_free_chunk(m, q);
3445  }
3446  lastq = q;
3447  q = next_chunk(q);
3448  }
3449  s = s->next;
3450  }
3451  }
3452  return sum;
3453 }
3454 
3455 
3456 /* Check all properties of malloc_state. */
3457 static void do_check_malloc_state(mstate m) {
3458  bindex_t i;
3459  size_t total;
3460  /* check bins */
3461  for (i = 0; i < NSMALLBINS; ++i)
3462  do_check_smallbin(m, i);
3463  for (i = 0; i < NTREEBINS; ++i)
3464  do_check_treebin(m, i);
3465 
3466  if (m->dvsize != 0) { /* check dv chunk */
3467  do_check_any_chunk(m, m->dv);
3468  assert(m->dvsize == chunksize(m->dv));
3469  assert(m->dvsize >= MIN_CHUNK_SIZE);
3470  assert(bin_find(m, m->dv) == 0);
3471  }
3472 
3473  if (m->top != 0) { /* check top chunk */
3474  do_check_top_chunk(m, m->top);
3475  /*assert(m->topsize == chunksize(m->top)); redundant */
3476  assert(m->topsize > 0);
3477  assert(bin_find(m, m->top) == 0);
3478  }
3479 
3480  total = traverse_and_check(m);
3481  assert(total <= m->footprint);
3482  assert(m->footprint <= m->max_footprint);
3483 }
3484 #endif /* DEBUG */
3485 
3486 /* ----------------------------- statistics ------------------------------ */
3487 
3488 #if !NO_MALLINFO
3489 static struct mallinfo internal_mallinfo(mstate m) {
3490  struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
3491  ensure_initialization();
3492  if (!PREACTION(m)) {
3493  check_malloc_state(m);
3494  if (is_initialized(m)) {
3495  size_t nfree = SIZE_T_ONE; /* top always free */
3496  size_t mfree = m->topsize + TOP_FOOT_SIZE;
3497  size_t sum = mfree;
3498  msegmentptr s = &m->seg;
3499  while (s != 0) {
3500  mchunkptr q = align_as_chunk(s->base);
3501  while (segment_holds(s, q) &&
3502  q != m->top && q->head != FENCEPOST_HEAD) {
3503  size_t sz = chunksize(q);
3504  sum += sz;
3505  if (!is_inuse(q)) {
3506  mfree += sz;
3507  ++nfree;
3508  }
3509  q = next_chunk(q);
3510  }
3511  s = s->next;
3512  }
3513 
3514  nm.arena = sum;
3515  nm.ordblks = nfree;
3516  nm.hblkhd = m->footprint - sum;
3517  nm.usmblks = m->max_footprint;
3518  nm.uordblks = m->footprint - mfree;
3519  nm.fordblks = mfree;
3520  nm.keepcost = m->topsize;
3521  }
3522 
3523  POSTACTION(m);
3524  }
3525  return nm;
3526 }
3527 #endif /* !NO_MALLINFO */
3528 
3529 #if !NO_MALLOC_STATS
3530 static void internal_malloc_stats(mstate m) {
3531  ensure_initialization();
3532  if (!PREACTION(m)) {
3533  size_t maxfp = 0;
3534  size_t fp = 0;
3535  size_t used = 0;
3536  check_malloc_state(m);
3537  if (is_initialized(m)) {
3538  msegmentptr s = &m->seg;
3539  maxfp = m->max_footprint;
3540  fp = m->footprint;
3541  used = fp - (m->topsize + TOP_FOOT_SIZE);
3542 
3543  while (s != 0) {
3544  mchunkptr q = align_as_chunk(s->base);
3545  while (segment_holds(s, q) &&
3546  q != m->top && q->head != FENCEPOST_HEAD) {
3547  if (!is_inuse(q))
3548  used -= chunksize(q);
3549  q = next_chunk(q);
3550  }
3551  s = s->next;
3552  }
3553  }
3554  POSTACTION(m); /* drop lock */
3555  fprintf(stderr, "max system bytes = %10lu\n", (unsigned long)(maxfp));
3556  fprintf(stderr, "system bytes = %10lu\n", (unsigned long)(fp));
3557  fprintf(stderr, "in use bytes = %10lu\n", (unsigned long)(used));
3558  }
3559 }
3560 #endif /* NO_MALLOC_STATS */
3561 
3562 /* ----------------------- Operations on smallbins ----------------------- */
3563 
3564 /*
3565  Various forms of linking and unlinking are defined as macros, even
3566  the ones for trees, which are very long but have very short typical
3567  paths. This is ugly but reduces reliance on the inlining support of
3568  compilers.
3569 */
3570 
3571 /* Link a free chunk into a smallbin */
3572 #define insert_small_chunk(M, P, S) {\
3573  bindex_t I = small_index(S);\
3574  mchunkptr B = smallbin_at(M, I);\
3575  mchunkptr F = B;\
3576  assert(S >= MIN_CHUNK_SIZE);\
3577  if (!smallmap_is_marked(M, I))\
3578  mark_smallmap(M, I);\
3579  else if (RTCHECK(ok_address(M, B->fd)))\
3580  F = B->fd;\
3581  else {\
3582  CORRUPTION_ERROR_ACTION(M);\
3583  }\
3584  B->fd = P;\
3585  F->bk = P;\
3586  P->fd = F;\
3587  P->bk = B;\
3588 }
3589 
3590 /* Unlink a chunk from a smallbin */
3591 #define unlink_small_chunk(M, P, S) {\
3592  mchunkptr F = P->fd;\
3593  mchunkptr B = P->bk;\
3594  bindex_t I = small_index(S);\
3595  assert(P != B);\
3596  assert(P != F);\
3597  assert(chunksize(P) == small_index2size(I));\
3598  if (RTCHECK(F == smallbin_at(M,I) || (ok_address(M, F) && F->bk == P))) { \
3599  if (B == F) {\
3600  clear_smallmap(M, I);\
3601  }\
3602  else if (RTCHECK(B == smallbin_at(M,I) ||\
3603  (ok_address(M, B) && B->fd == P))) {\
3604  F->bk = B;\
3605  B->fd = F;\
3606  }\
3607  else {\
3608  CORRUPTION_ERROR_ACTION(M);\
3609  }\
3610  }\
3611  else {\
3612  CORRUPTION_ERROR_ACTION(M);\
3613  }\
3614 }
3615 
3616 /* Unlink the first chunk from a smallbin */
3617 #define unlink_first_small_chunk(M, B, P, I) {\
3618  mchunkptr F = P->fd;\
3619  assert(P != B);\
3620  assert(P != F);\
3621  assert(chunksize(P) == small_index2size(I));\
3622  if (B == F) {\
3623  clear_smallmap(M, I);\
3624  }\
3625  else if (RTCHECK(ok_address(M, F) && F->bk == P)) {\
3626  F->bk = B;\
3627  B->fd = F;\
3628  }\
3629  else {\
3630  CORRUPTION_ERROR_ACTION(M);\
3631  }\
3632 }
3633 
3634 /* Replace dv node, binning the old one */
3635 /* Used only when dvsize known to be small */
3636 #define replace_dv(M, P, S) {\
3637  size_t DVS = M->dvsize;\
3638  assert(is_small(DVS));\
3639  if (DVS != 0) {\
3640  mchunkptr DV = M->dv;\
3641  insert_small_chunk(M, DV, DVS);\
3642  }\
3643  M->dvsize = S;\
3644  M->dv = P;\
3645 }
3646 
3647 /* ------------------------- Operations on trees ------------------------- */
3648 
3649 /* Insert chunk into tree */
3650 #define insert_large_chunk(M, X, S) {\
3651  tbinptr* H;\
3652  bindex_t I;\
3653  compute_tree_index(S, I);\
3654  H = treebin_at(M, I);\
3655  X->index = I;\
3656  X->child[0] = X->child[1] = 0;\
3657  if (!treemap_is_marked(M, I)) {\
3658  mark_treemap(M, I);\
3659  *H = X;\
3660  X->parent = (tchunkptr)H;\
3661  X->fd = X->bk = X;\
3662  }\
3663  else {\
3664  tchunkptr T = *H;\
3665  size_t K = S << leftshift_for_tree_index(I);\
3666  for (;;) {\
3667  if (chunksize(T) != S) {\
3668  tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\
3669  K <<= 1;\
3670  if (*C != 0)\
3671  T = *C;\
3672  else if (RTCHECK(ok_address(M, C))) {\
3673  *C = X;\
3674  X->parent = T;\
3675  X->fd = X->bk = X;\
3676  break;\
3677  }\
3678  else {\
3679  CORRUPTION_ERROR_ACTION(M);\
3680  break;\
3681  }\
3682  }\
3683  else {\
3684  tchunkptr F = T->fd;\
3685  if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\
3686  T->fd = F->bk = X;\
3687  X->fd = F;\
3688  X->bk = T;\
3689  X->parent = 0;\
3690  break;\
3691  }\
3692  else {\
3693  CORRUPTION_ERROR_ACTION(M);\
3694  break;\
3695  }\
3696  }\
3697  }\
3698  }\
3699 }
3700 
3701 /*
3702  Unlink steps:
3703 
3704  1. If x is a chained node, unlink it from its same-sized fd/bk links
3705  and choose its bk node as its replacement.
3706  2. If x was the last node of its size, but not a leaf node, it must
3707  be replaced with a leaf node (not merely one with an open left or
3708  right), to make sure that lefts and rights of descendents
3709  correspond properly to bit masks. We use the rightmost descendent
3710  of x. We could use any other leaf, but this is easy to locate and
3711  tends to counteract removal of leftmosts elsewhere, and so keeps
3712  paths shorter than minimally guaranteed. This doesn't loop much
3713  because on average a node in a tree is near the bottom.
3714  3. If x is the base of a chain (i.e., has parent links) relink
3715  x's parent and children to x's replacement (or null if none).
3716 */
3717 
3718 #define unlink_large_chunk(M, X) {\
3719  tchunkptr XP = X->parent;\
3720  tchunkptr R;\
3721  if (X->bk != X) {\
3722  tchunkptr F = X->fd;\
3723  R = X->bk;\
3724  if (RTCHECK(ok_address(M, F) && F->bk == X && R->fd == X)) {\
3725  F->bk = R;\
3726  R->fd = F;\
3727  }\
3728  else {\
3729  CORRUPTION_ERROR_ACTION(M);\
3730  }\
3731  }\
3732  else {\
3733  tchunkptr* RP;\
3734  if (((R = *(RP = &(X->child[1]))) != 0) ||\
3735  ((R = *(RP = &(X->child[0]))) != 0)) {\
3736  tchunkptr* CP;\
3737  while ((*(CP = &(R->child[1])) != 0) ||\
3738  (*(CP = &(R->child[0])) != 0)) {\
3739  R = *(RP = CP);\
3740  }\
3741  if (RTCHECK(ok_address(M, RP)))\
3742  *RP = 0;\
3743  else {\
3744  CORRUPTION_ERROR_ACTION(M);\
3745  }\
3746  }\
3747  }\
3748  if (XP != 0) {\
3749  tbinptr* H = treebin_at(M, X->index);\
3750  if (X == *H) {\
3751  if ((*H = R) == 0) \
3752  clear_treemap(M, X->index);\
3753  }\
3754  else if (RTCHECK(ok_address(M, XP))) {\
3755  if (XP->child[0] == X) \
3756  XP->child[0] = R;\
3757  else \
3758  XP->child[1] = R;\
3759  }\
3760  else\
3761  CORRUPTION_ERROR_ACTION(M);\
3762  if (R != 0) {\
3763  if (RTCHECK(ok_address(M, R))) {\
3764  tchunkptr C0, C1;\
3765  R->parent = XP;\
3766  if ((C0 = X->child[0]) != 0) {\
3767  if (RTCHECK(ok_address(M, C0))) {\
3768  R->child[0] = C0;\
3769  C0->parent = R;\
3770  }\
3771  else\
3772  CORRUPTION_ERROR_ACTION(M);\
3773  }\
3774  if ((C1 = X->child[1]) != 0) {\
3775  if (RTCHECK(ok_address(M, C1))) {\
3776  R->child[1] = C1;\
3777  C1->parent = R;\
3778  }\
3779  else\
3780  CORRUPTION_ERROR_ACTION(M);\
3781  }\
3782  }\
3783  else\
3784  CORRUPTION_ERROR_ACTION(M);\
3785  }\
3786  }\
3787 }
3788 
3789 /* Relays to large vs small bin operations */
3790 
3791 #define insert_chunk(M, P, S)\
3792  if (is_small(S)) insert_small_chunk(M, P, S)\
3793  else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); }
3794 
3795 #define unlink_chunk(M, P, S)\
3796  if (is_small(S)) unlink_small_chunk(M, P, S)\
3797  else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); }
3798 
3799 
3800 /* Relays to internal calls to malloc/free from realloc, memalign etc */
3801 
3802 #if ONLY_MSPACES
3803 #define internal_malloc(m, b) mspace_malloc(m, b)
3804 #define internal_free(m, mem) mspace_free(m,mem);
3805 #else /* ONLY_MSPACES */
3806 #if MSPACES
3807 #define internal_malloc(m, b)\
3808  ((m == gm)? dlmalloc(b) : mspace_malloc(m, b))
3809 #define internal_free(m, mem)\
3810  if (m == gm) dlfree(mem); else mspace_free(m,mem);
3811 #else /* MSPACES */
3812 #define internal_malloc(m, b) dlmalloc(b)
3813 #define internal_free(m, mem) dlfree(mem)
3814 #endif /* MSPACES */
3815 #endif /* ONLY_MSPACES */
3816 
3817 /* ----------------------- Direct-mmapping chunks ----------------------- */
3818 
3819 /*
3820  Directly mmapped chunks are set up with an offset to the start of
3821  the mmapped region stored in the prev_foot field of the chunk. This
3822  allows reconstruction of the required argument to MUNMAP when freed,
3823  and also allows adjustment of the returned chunk to meet alignment
3824  requirements (especially in memalign).
3825 */
3826 
3827 /* Malloc using mmap */
3828 static void* mmap_alloc(mstate m, size_t nb) {
3829  size_t mmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3830  if (m->footprint_limit != 0) {
3831  size_t fp = m->footprint + mmsize;
3832  if (fp <= m->footprint || fp > m->footprint_limit)
3833  return 0;
3834  }
3835  if (mmsize > nb) { /* Check for wrap around 0 */
3836  char* mm = (char*)(CALL_DIRECT_MMAP(mmsize));
3837  if (mm != CMFAIL) {
3838  size_t offset = align_offset(chunk2mem(mm));
3839  size_t psize = mmsize - offset - MMAP_FOOT_PAD;
3840  mchunkptr p = (mchunkptr)(mm + offset);
3841  p->prev_foot = offset;
3842  p->head = psize;
3843  mark_inuse_foot(m, p, psize);
3844  chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD;
3845  chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0;
3846 
3847  if (m->least_addr == 0 || mm < m->least_addr)
3848  m->least_addr = mm;
3849  if ((m->footprint += mmsize) > m->max_footprint)
3850  m->max_footprint = m->footprint;
3852  check_mmapped_chunk(m, p);
3853  return chunk2mem(p);
3854  }
3855  }
3856  return 0;
3857 }
3858 
3859 /* Realloc using mmap */
3860 static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb, int flags) {
3861  size_t oldsize = chunksize(oldp);
3862  (void)flags; /* placate people compiling -Wunused */
3863  if (is_small(nb)) /* Can't shrink mmap regions below small size */
3864  return 0;
3865  /* Keep old chunk if big enough but not too big */
3866  if (oldsize >= nb + SIZE_T_SIZE &&
3867  (oldsize - nb) <= (mparams.granularity << 1))
3868  return oldp;
3869  else {
3870  size_t offset = oldp->prev_foot;
3871  size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD;
3872  size_t newmmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3873  char* cp = (char*)CALL_MREMAP((char*)oldp - offset,
3874  oldmmsize, newmmsize, flags);
3875  if (cp != CMFAIL) {
3876  mchunkptr newp = (mchunkptr)(cp + offset);
3877  size_t psize = newmmsize - offset - MMAP_FOOT_PAD;
3878  newp->head = psize;
3879  mark_inuse_foot(m, newp, psize);
3880  chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD;
3881  chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head = 0;
3882 
3883  if (cp < m->least_addr)
3884  m->least_addr = cp;
3885  if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint)
3886  m->max_footprint = m->footprint;
3887  check_mmapped_chunk(m, newp);
3888  return newp;
3889  }
3890  }
3891  return 0;
3892 }
3893 
3894 
3895 /* -------------------------- mspace management -------------------------- */
3896 
3897 /* Initialize top chunk and its size */
3898 static void init_top(mstate m, mchunkptr p, size_t psize) {
3899  /* Ensure alignment */
3900  size_t offset = align_offset(chunk2mem(p));
3901  p = (mchunkptr)((char*)p + offset);
3902  psize -= offset;
3903 
3904  m->top = p;
3905  m->topsize = psize;
3906  p->head = psize | PINUSE_BIT;
3907  /* set size of fake trailing chunk holding overhead space only once */
3908  chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE;
3909  m->trim_check = mparams.trim_threshold; /* reset on each update */
3910 }
3911 
3912 /* Initialize bins for a new mstate that is otherwise zeroed out */
3913 static void init_bins(mstate m) {
3914  /* Establish circular links for smallbins */
3915  bindex_t i;
3916  for (i = 0; i < NSMALLBINS; ++i) {
3917  sbinptr bin = smallbin_at(m,i);
3918  bin->fd = bin->bk = bin;
3919  }
3920 }
3921 
3922 #if PROCEED_ON_ERROR
3923 
3924 /* default corruption action */
3925 static void reset_on_error(mstate m) {
3926  int i;
3927  ++malloc_corruption_error_count;
3928  /* Reinitialize fields to forget about all memory */
3929  m->smallmap = m->treemap = 0;
3930  m->dvsize = m->topsize = 0;
3931  m->seg.base = 0;
3932  m->seg.size = 0;
3933  m->seg.next = 0;
3934  m->top = m->dv = 0;
3935  for (i = 0; i < NTREEBINS; ++i)
3936  *treebin_at(m, i) = 0;
3937  init_bins(m);
3938 }
3939 #endif /* PROCEED_ON_ERROR */
3940 
3941 /* Allocate a chunk from newbase; the remainder is placed in front of the
3942  first chunk of the successor (old) base, merging with it when possible. */
3942 static void* prepend_alloc(mstate m, char* newbase, char* oldbase,
3943  size_t nb) {
3944  mchunkptr p = align_as_chunk(newbase);
3945  mchunkptr oldfirst = align_as_chunk(oldbase);
3946  size_t psize = (char*)oldfirst - (char*)p;
3947  mchunkptr q = chunk_plus_offset(p, nb);
3948  size_t qsize = psize - nb;
3949  set_size_and_pinuse_of_inuse_chunk(m, p, nb);
3950 
3951  assert((char*)oldfirst > (char*)q);
3952  assert(pinuse(oldfirst));
3953  assert(qsize >= MIN_CHUNK_SIZE);
3954 
3955  /* consolidate remainder with first chunk of old base */
3956  if (oldfirst == m->top) {
3957  size_t tsize = m->topsize += qsize;
3958  m->top = q;
3959  q->head = tsize | PINUSE_BIT;
3960  check_top_chunk(m, q);
3961  }
3962  else if (oldfirst == m->dv) {
3963  size_t dsize = m->dvsize += qsize;
3964  m->dv = q;
3965  q->head = dsize | PINUSE_BIT;
3966  }
3967  else {
3968  if (!is_inuse(oldfirst)) {
3969  size_t nsize = chunksize(oldfirst);
3970  unlink_chunk(m, oldfirst, nsize);
3971  oldfirst = chunk_plus_offset(oldfirst, nsize);
3972  qsize += nsize;
3973  }
3974  set_free_with_pinuse(q, qsize, oldfirst);
3975  insert_chunk(m, q, qsize);
3976  check_free_chunk(m, q);
3977  }
3978 
3979  check_malloced_chunk(m, chunk2mem(p), nb);
3980  return chunk2mem(p);
3981 }
3982 
3983 /* Add a segment to hold a new noncontiguous region */
3984 static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) {
3985  /* Determine locations and sizes of segment, fenceposts, old top */
3986  char* old_top = (char*)m->top;
3987  msegmentptr oldsp = segment_holding(m, old_top);
3988  char* old_end = oldsp->base + oldsp->size;
3989  size_t ssize = pad_request(sizeof(struct malloc_segment));
3990  char* rawsp = old_end - (ssize + FOUR_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3991  size_t offset = align_offset(chunk2mem(rawsp));
3992  char* asp = rawsp + offset;
3993  char* csp = (asp < (old_top + MIN_CHUNK_SIZE))? old_top : asp;
3994  mchunkptr sp = (mchunkptr)csp;
3995  msegmentptr ss = (msegmentptr)(chunk2mem(sp));
3996  mchunkptr tnext = chunk_plus_offset(sp, ssize);
3997  mchunkptr p = tnext;
3998  int nfences = 0;
3999 
4000  /* reset top to new space */
4001  init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
4002 
4003  /* Set up segment record */
4004  assert(is_aligned(ss));
4005  set_size_and_pinuse_of_inuse_chunk(m, sp, ssize);
4006  *ss = m->seg; /* Push current record */
4007  m->seg.base = tbase;
4008  m->seg.size = tsize;
4009  m->seg.sflags = mmapped;
4010  m->seg.next = ss;
4011 
4012  /* Insert trailing fenceposts */
4013  for (;;) {
4014  mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE);
4015  p->head = FENCEPOST_HEAD;
4016  ++nfences;
4017  if ((char*)(&(nextp->head)) < old_end)
4018  p = nextp;
4019  else
4020  break;
4021  }
4022  assert(nfences >= 2);
4023 
4024  /* Insert the rest of old top into a bin as an ordinary free chunk */
4025  if (csp != old_top) {
4026  mchunkptr q = (mchunkptr)old_top;
4027  size_t psize = csp - old_top;
4028  mchunkptr tn = chunk_plus_offset(q, psize);
4029  set_free_with_pinuse(q, psize, tn);
4030  insert_chunk(m, q, psize);
4031  }
4032 
4033  check_top_chunk(m, m->top);
4034 }
4035 
4036 /* -------------------------- System allocation -------------------------- */
4037 
4038 /* Get memory from system using MORECORE or MMAP */
4039 static void* sys_alloc(mstate m, size_t nb) {
4040  char* tbase = CMFAIL;
4041  size_t tsize = 0;
4042  flag_t mmap_flag = 0;
4043  size_t asize; /* allocation size */
4044  size_t asize; /* allocation size */
4045  ensure_initialization();
4046 
4047  /* Directly map large chunks, but only if already initialized */
4048  if (use_mmap(m) && nb >= mparams.mmap_threshold && m->topsize != 0) {
4049  void* mem = mmap_alloc(m, nb);
4050  if (mem != 0)
4051  return mem;
4052  }
4053 
4054  asize = granularity_align(nb + SYS_ALLOC_PADDING);
4055  if (asize <= nb)
4056  return 0; /* wraparound */
4057  if (m->footprint_limit != 0) {
4058  size_t fp = m->footprint + asize;
4059  if (fp <= m->footprint || fp > m->footprint_limit)
4060  return 0;
4061  }
4062 
4063  /*
4064  Try getting memory in any of three ways (in most-preferred to
4065  least-preferred order):
4066  1. A call to MORECORE that can normally contiguously extend memory.
4067  (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE
4068  or main space is mmapped or a previous contiguous call failed)
4069  2. A call to MMAP new space (disabled if not HAVE_MMAP).
4070  Note that under the default settings, if MORECORE is unable to
4071  fulfill a request, and HAVE_MMAP is true, then mmap is
4072  used as a noncontiguous system allocator. This is a useful backup
4073  strategy for systems with holes in address spaces -- in this case
4074  sbrk cannot contiguously expand the heap, but mmap may be able to
4075  find space.
4076  3. A call to MORECORE that cannot usually contiguously extend memory.
4077  (disabled if not HAVE_MORECORE)
4078 
4079  In all cases, we need to request enough bytes from system to ensure
4080  we can malloc nb bytes upon success, so pad with enough space for
4081  top_foot, plus alignment-pad to make sure we don't lose bytes if
4082  not on boundary, and round this up to a granularity unit.
4083  */
4084 
4085  if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) {
4086  char* br = CMFAIL;
4087  size_t ssize = asize; /* sbrk call size */
4088  msegmentptr ss = (m->top == 0)? 0 : segment_holding(m, (char*)m->top);
4090 
4091  if (ss == 0) { /* First time through or recovery */
4092  char* base = (char*)CALL_MORECORE(0);
4093  if (base != CMFAIL) {
4094  size_t fp;
4095  /* Adjust to end on a page boundary */
4096  if (!is_page_aligned(base))
4097  ssize += (page_align((size_t)base) - (size_t)base);
4098  fp = m->footprint + ssize; /* recheck limits */
4099  if (ssize > nb && ssize < HALF_MAX_SIZE_T &&
4100  (m->footprint_limit == 0 ||
4101  (fp > m->footprint && fp <= m->footprint_limit)) &&
4102  (br = (char*)(CALL_MORECORE(ssize))) == base) {
4103  tbase = base;
4104  tsize = ssize;
4105  }
4106  }
4107  }
4108  else {
4109  /* Subtract out existing available top space from MORECORE request. */
4110  ssize = granularity_align(nb - m->topsize + SYS_ALLOC_PADDING);
4111  /* Use mem here only if it did contiguously extend old space */
4112  if (ssize < HALF_MAX_SIZE_T &&
4113  (br = (char*)(CALL_MORECORE(ssize))) == ss->base+ss->size) {
4114  tbase = br;
4115  tsize = ssize;
4116  }
4117  }
4118 
4119  if (tbase == CMFAIL) { /* Cope with partial failure */
4120  if (br != CMFAIL) { /* Try to use/extend the space we did get */
4121  if (ssize < HALF_MAX_SIZE_T &&
4122  ssize < nb + SYS_ALLOC_PADDING) {
4123  size_t esize = granularity_align(nb + SYS_ALLOC_PADDING - ssize);
4124  if (esize < HALF_MAX_SIZE_T) {
4125  char* end = (char*)CALL_MORECORE(esize);
4126  if (end != CMFAIL)
4127  ssize += esize;
4128  else { /* Can't use; try to release */
4129  (void) CALL_MORECORE(-ssize);
4130  br = CMFAIL;
4131  }
4132  }
4133  }
4134  }
4135  if (br != CMFAIL) { /* Use the space we did get */
4136  tbase = br;
4137  tsize = ssize;
4138  }
4139  else
4140  disable_contiguous(m); /* Don't try contiguous path in the future */
4141  }
4142 
4144  }
4145 
4146  if (HAVE_MMAP && tbase == CMFAIL) { /* Try MMAP */
4147  char* mp = (char*)(CALL_MMAP(asize));
4148  if (mp != CMFAIL) {
4149  tbase = mp;
4150  tsize = asize;
4151  mmap_flag = USE_MMAP_BIT;
4152  }
4153  }
4154 
4155  if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */
4156  if (asize < HALF_MAX_SIZE_T) {
4157  char* br = CMFAIL;
4158  char* end = CMFAIL;
4160  br = (char*)(CALL_MORECORE(asize));
4161  end = (char*)(CALL_MORECORE(0));
4163  if (br != CMFAIL && end != CMFAIL && br < end) {
4164  size_t ssize = end - br;
4165  if (ssize > nb + TOP_FOOT_SIZE) {
4166  tbase = br;
4167  tsize = ssize;
4168  }
4169  }
4170  }
4171  }
4172 
4173  if (tbase != CMFAIL) {
4174 
4175  if ((m->footprint += tsize) > m->max_footprint)
4176  m->max_footprint = m->footprint;
4177 
4178  if (!is_initialized(m)) { /* first-time initialization */
4179  if (m->least_addr == 0 || tbase < m->least_addr)
4180  m->least_addr = tbase;
4181  m->seg.base = tbase;
4182  m->seg.size = tsize;
4183  m->seg.sflags = mmap_flag;
4184  m->magic = mparams.magic;
4185  m->release_checks = MAX_RELEASE_CHECK_RATE;
4186  init_bins(m);
4187 #if !ONLY_MSPACES
4188  if (is_global(m))
4189  init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
4190  else
4191 #endif
4192  {
4193  /* Offset top by embedded malloc_state */
4194  mchunkptr mn = next_chunk(mem2chunk(m));
4195  init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) -TOP_FOOT_SIZE);
4196  }
4197  }
4198 
4199  else {
4200  /* Try to merge with an existing segment */
4201  msegmentptr sp = &m->seg;
4202  /* Only consider most recent segment if traversal suppressed */
4203  while (sp != 0 && tbase != sp->base + sp->size)
4204  sp = (NO_SEGMENT_TRAVERSAL) ? 0 : sp->next;
4205  if (sp != 0 &&
4206  !is_extern_segment(sp) &&
4207  (sp->sflags & USE_MMAP_BIT) == mmap_flag &&
4208  segment_holds(sp, m->top)) { /* append */
4209  sp->size += tsize;
4210  init_top(m, m->top, m->topsize + tsize);
4211  }
4212  else {
4213  if (tbase < m->least_addr)
4214  m->least_addr = tbase;
4215  sp = &m->seg;
4216  while (sp != 0 && sp->base != tbase + tsize)
4217  sp = (NO_SEGMENT_TRAVERSAL) ? 0 : sp->next;
4218  if (sp != 0 &&
4219  !is_extern_segment(sp) &&
4220  (sp->sflags & USE_MMAP_BIT) == mmap_flag) {
4221  char* oldbase = sp->base;
4222  sp->base = tbase;
4223  sp->size += tsize;
4224  return prepend_alloc(m, tbase, oldbase, nb);
4225  }
4226  else
4227  add_segment(m, tbase, tsize, mmap_flag);
4228  }
4229  }
4230 
4231  if (nb < m->topsize) { /* Allocate from new or extended top space */
4232  size_t rsize = m->topsize -= nb;
4233  mchunkptr p = m->top;
4234  mchunkptr r = m->top = chunk_plus_offset(p, nb);
4235  r->head = rsize | PINUSE_BIT;
4236  set_size_and_pinuse_of_inuse_chunk(m, p, nb);
4237  check_top_chunk(m, m->top);
4238  check_malloced_chunk(m, chunk2mem(p), nb);
4239  return chunk2mem(p);
4240  }
4241  }
4242 
4243  MALLOC_FAILURE_ACTION;
4244  return 0;
4245 }
4246 
4247 /* ----------------------- system deallocation -------------------------- */
4248 
4249 /* Unmap and unlink any mmapped segments that don't contain used chunks */
4250 static size_t release_unused_segments(mstate m) {
4251  size_t released = 0;
4252  int nsegs = 0;
4253  msegmentptr pred = &m->seg;
4254  msegmentptr sp = pred->next;
4255  while (sp != 0) {
4256  char* base = sp->base;
4257  size_t size = sp->size;
4258  msegmentptr next = sp->next;
4259  ++nsegs;
4260  if (is_mmapped_segment(sp) && !is_extern_segment(sp)) {
4261  mchunkptr p = align_as_chunk(base);
4262  size_t psize = chunksize(p);
4263  /* Can unmap if first chunk holds entire segment and not pinned */
4264  if (!is_inuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE) {
4265  tchunkptr tp = (tchunkptr)p;
4266  assert(segment_holds(sp, (char*)sp));
4267  if (p == m->dv) {
4268  m->dv = 0;
4269  m->dvsize = 0;
4270  }
4271  else {
4272  unlink_large_chunk(m, tp);
4273  }
4274  if (CALL_MUNMAP(base, size) == 0) {
4275  released += size;
4276  m->footprint -= size;
4277  /* unlink obsoleted record */
4278  sp = pred;
4279  sp->next = next;
4280  }
4281  else { /* back out if cannot unmap */
4282  insert_large_chunk(m, tp, psize);
4283  }
4284  }
4285  }
4286  if (NO_SEGMENT_TRAVERSAL) /* scan only first segment */
4287  break;
4288  pred = sp;
4289  sp = next;
4290  }
4291  /* Reset check counter */
4292  m->release_checks = (((size_t) nsegs > (size_t) MAX_RELEASE_CHECK_RATE)?
4293  (size_t) nsegs : (size_t) MAX_RELEASE_CHECK_RATE);
4294  return released;
4295 }
4296 
4297 static int sys_trim(mstate m, size_t pad) {
4298  size_t released = 0;
4299  ensure_initialization();
4300  if (pad < MAX_REQUEST && is_initialized(m)) {
4301  pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */
4302 
4303  if (m->topsize > pad) {
4304  /* Shrink top space in granularity-size units, keeping at least one */
4305  size_t unit = mparams.granularity;
4306  size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit -
4307  SIZE_T_ONE) * unit;
4308  msegmentptr sp = segment_holding(m, (char*)m->top);
4309 
4310  if (!is_extern_segment(sp)) {
4311  if (is_mmapped_segment(sp)) {
4312  if (HAVE_MMAP &&
4313  sp->size >= extra &&
4314  !has_segment_link(m, sp)) { /* can't shrink if pinned */
4315  size_t newsize = sp->size - extra;
4316  (void)newsize; /* placate people compiling -Wunused-variable */
4317  /* Prefer mremap, fall back to munmap */
4318  if ((CALL_MREMAP(sp->base, sp->size, newsize, 0) != MFAIL) ||
4319  (CALL_MUNMAP(sp->base + newsize, extra) == 0)) {
4320  released = extra;
4321  }
4322  }
4323  }
4324  else if (HAVE_MORECORE) {
4325  if (extra >= HALF_MAX_SIZE_T) /* Avoid wrapping negative */
4326  extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit;
4328  {
4329  /* Make sure end of memory is where we last set it. */
4330  char* old_br = (char*)(CALL_MORECORE(0));
4331  if (old_br == sp->base + sp->size) {
4332  char* rel_br = (char*)(CALL_MORECORE(-extra));
4333  char* new_br = (char*)(CALL_MORECORE(0));
4334  if (rel_br != CMFAIL && new_br < old_br)
4335  released = old_br - new_br;
4336  }
4337  }
4339  }
4340  }
4341 
4342  if (released != 0) {
4343  sp->size -= released;
4344  m->footprint -= released;
4345  init_top(m, m->top, m->topsize - released);
4346  check_top_chunk(m, m->top);
4347  }
4348  }
4349 
4350  /* Unmap any unused mmapped segments */
4351  if (HAVE_MMAP)
4352  released += release_unused_segments(m);
4353 
4354  /* On failure, disable autotrim to avoid repeated failed future calls */
4355  if (released == 0 && m->topsize > m->trim_check)
4356  m->trim_check = MAX_SIZE_T;
4357  }
4358 
4359  return (released != 0)? 1 : 0;
4360 }
4361 
4362 /* Consolidate and bin a chunk. Differs from exported versions
4363  of free mainly in that the chunk need not be marked as inuse.
4364 */
4365 static void dispose_chunk(mstate m, mchunkptr p, size_t psize) {
4366  mchunkptr next = chunk_plus_offset(p, psize);
4367  if (!pinuse(p)) {
4368  mchunkptr prev;
4369  size_t prevsize = p->prev_foot;
4370  if (is_mmapped(p)) {
4371  psize += prevsize + MMAP_FOOT_PAD;
4372  if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
4373  m->footprint -= psize;
4374  return;
4375  }
4376  prev = chunk_minus_offset(p, prevsize);
4377  psize += prevsize;
4378  p = prev;
4379  if (RTCHECK(ok_address(m, prev))) { /* consolidate backward */
4380  if (p != m->dv) {
4381  unlink_chunk(m, p, prevsize);
4382  }
4383  else if ((next->head & INUSE_BITS) == INUSE_BITS) {
4384  m->dvsize = psize;
4385  set_free_with_pinuse(p, psize, next);
4386  return;
4387  }
4388  }
4389  else {
4390  CORRUPTION_ERROR_ACTION(m);
4391  return;
4392  }
4393  }
4394  if (RTCHECK(ok_address(m, next))) {
4395  if (!cinuse(next)) { /* consolidate forward */
4396  if (next == m->top) {
4397  size_t tsize = m->topsize += psize;
4398  m->top = p;
4399  p->head = tsize | PINUSE_BIT;
4400  if (p == m->dv) {
4401  m->dv = 0;
4402  m->dvsize = 0;
4403  }
4404  return;
4405  }
4406  else if (next == m->dv) {
4407  size_t dsize = m->dvsize += psize;
4408  m->dv = p;
4409  set_size_and_pinuse_of_free_chunk(p, dsize);
4410  return;
4411  }
4412  else {
4413  size_t nsize = chunksize(next);
4414  psize += nsize;
4415  unlink_chunk(m, next, nsize);
4416  set_size_and_pinuse_of_free_chunk(p, psize);
4417  if (p == m->dv) {
4418  m->dvsize = psize;
4419  return;
4420  }
4421  }
4422  }
4423  else {
4424  set_free_with_pinuse(p, psize, next);
4425  }
4426  insert_chunk(m, p, psize);
4427  }
4428  else {
4429  CORRUPTION_ERROR_ACTION(m);
4430  }
4431 }
4432 
4433 /* ---------------------------- malloc --------------------------- */
4434 
4435 /* allocate a large request from the best fitting chunk in a treebin */
4436 static void* tmalloc_large(mstate m, size_t nb) {
4437  tchunkptr v = 0;
4438  size_t rsize = -nb; /* Unsigned negation */
4439  tchunkptr t;
4440  bindex_t idx;
4441  compute_tree_index(nb, idx);
4442  if ((t = *treebin_at(m, idx)) != 0) {
4443  /* Traverse tree for this bin looking for node with size == nb */
4444  size_t sizebits = nb << leftshift_for_tree_index(idx);
4445  tchunkptr rst = 0; /* The deepest untaken right subtree */
4446  for (;;) {
4447  tchunkptr rt;
4448  size_t trem = chunksize(t) - nb;
4449  if (trem < rsize) {
4450  v = t;
4451  if ((rsize = trem) == 0)
4452  break;
4453  }
4454  rt = t->child[1];
4455  t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
4456  if (rt != 0 && rt != t)
4457  rst = rt;
4458  if (t == 0) {
4459  t = rst; /* set t to least subtree holding sizes > nb */
4460  break;
4461  }
4462  sizebits <<= 1;
4463  }
4464  }
4465  if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */
4466  binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap;
4467  if (leftbits != 0) {
4468  bindex_t i;
4469  binmap_t leastbit = least_bit(leftbits);
4470  compute_bit2idx(leastbit, i);
4471  t = *treebin_at(m, i);
4472  }
4473  }
4474 
4475  while (t != 0) { /* find smallest of tree or subtree */
4476  size_t trem = chunksize(t) - nb;
4477  if (trem < rsize) {
4478  rsize = trem;
4479  v = t;
4480  }
4481  t = leftmost_child(t);
4482  }
4483 
4484  /* If dv is a better fit, return 0 so malloc will use it */
4485  if (v != 0 && rsize < (size_t)(m->dvsize - nb)) {
4486  if (RTCHECK(ok_address(m, v))) { /* split */
4487  mchunkptr r = chunk_plus_offset(v, nb);
4488  assert(chunksize(v) == rsize + nb);
4489  if (RTCHECK(ok_next(v, r))) {
4490  unlink_large_chunk(m, v);
4491  if (rsize < MIN_CHUNK_SIZE)
4492  set_inuse_and_pinuse(m, v, (rsize + nb));
4493  else {
4494  set_size_and_pinuse_of_inuse_chunk(m, v, nb);
4495  set_size_and_pinuse_of_free_chunk(r, rsize);
4496  insert_chunk(m, r, rsize);
4497  }
4498  return chunk2mem(v);
4499  }
4500  }
4501  CORRUPTION_ERROR_ACTION(m);
4502  }
4503  return 0;
4504 }
4505 
4506 /* allocate a small request from the best fitting chunk in a treebin */
4507 static void* tmalloc_small(mstate m, size_t nb) {
4508  tchunkptr t, v;
4509  size_t rsize;
4510  bindex_t i;
4511  binmap_t leastbit = least_bit(m->treemap);
4512  compute_bit2idx(leastbit, i);
4513  v = t = *treebin_at(m, i);
4514  rsize = chunksize(t) - nb;
4515 
4516  while ((t = leftmost_child(t)) != 0) {
4517  size_t trem = chunksize(t) - nb;
4518  if (trem < rsize) {
4519  rsize = trem;
4520  v = t;
4521  }
4522  }
4523 
4524  if (RTCHECK(ok_address(m, v))) {
4525  mchunkptr r = chunk_plus_offset(v, nb);
4526  assert(chunksize(v) == rsize + nb);
4527  if (RTCHECK(ok_next(v, r))) {
4528  unlink_large_chunk(m, v);
4529  if (rsize < MIN_CHUNK_SIZE)
4530  set_inuse_and_pinuse(m, v, (rsize + nb));
4531  else {
4532  set_size_and_pinuse_of_inuse_chunk(m, v, nb);
4533  set_size_and_pinuse_of_free_chunk(r, rsize);
4534  replace_dv(m, r, rsize);
4535  }
4536  return chunk2mem(v);
4537  }
4538  }
4539 
4540  CORRUPTION_ERROR_ACTION(m);
4541  return 0;
4542 }
4543 
4544 #if !ONLY_MSPACES
4545 
4546 void* dlmalloc(size_t bytes) {
4547  /*
4548  Basic algorithm:
4549  If a small request (< 256 bytes minus per-chunk overhead):
4550  1. If one exists, use a remainderless chunk in associated smallbin.
4551  (Remainderless means that there are too few excess bytes to
4552  represent as a chunk.)
4553  2. If it is big enough, use the dv chunk, which is normally the
4554  chunk adjacent to the one used for the most recent small request.
4555  3. If one exists, split the smallest available chunk in a bin,
4556  saving remainder in dv.
4557  4. If it is big enough, use the top chunk.
4558  5. If available, get memory from system and use it
4559  Otherwise, for a large request:
4560  1. Find the smallest available binned chunk that fits, and use it
4561  if it is better fitting than dv chunk, splitting if necessary.
4562  2. If better fitting than any binned chunk, use the dv chunk.
4563  3. If it is big enough, use the top chunk.
4564  4. If request size >= mmap threshold, try to directly mmap this chunk.
4565  5. If available, get memory from system and use it
4566 
4567  The ugly goto's here ensure that postaction occurs along all paths.
4568  */
4569 
4570 #if USE_LOCKS
4571  ensure_initialization(); /* initialize in sys_alloc if not using locks */
4572 #endif
4573 
4574  if (!PREACTION(gm)) {
4575  void* mem;
4576  size_t nb;
4577  if (bytes <= MAX_SMALL_REQUEST) {
4578  bindex_t idx;
4579  binmap_t smallbits;
4580  nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
4581  idx = small_index(nb);
4582  smallbits = gm->smallmap >> idx;
4583 
4584  if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
4585  mchunkptr b, p;
4586  idx += ~smallbits & 1; /* Uses next bin if idx empty */
4587  b = smallbin_at(gm, idx);
4588  p = b->fd;
4589  assert(chunksize(p) == small_index2size(idx));
4590  unlink_first_small_chunk(gm, b, p, idx);
4591  set_inuse_and_pinuse(gm, p, small_index2size(idx));
4592  mem = chunk2mem(p);
4593  check_malloced_chunk(gm, mem, nb);
4594  goto postaction;
4595  }
4596 
4597  else if (nb > gm->dvsize) {
4598  if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
4599  mchunkptr b, p, r;
4600  size_t rsize;
4601  bindex_t i;
4602  binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
4603  binmap_t leastbit = least_bit(leftbits);
4604  compute_bit2idx(leastbit, i);
4605  b = smallbin_at(gm, i);
4606  p = b->fd;
4607  assert(chunksize(p) == small_index2size(i));
4608  unlink_first_small_chunk(gm, b, p, i);
4609  rsize = small_index2size(i) - nb;
4610  /* Fit here cannot be remainderless if 4byte sizes */
4611  if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
4612  set_inuse_and_pinuse(gm, p, small_index2size(i));
4613  else {
4614  set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4615  r = chunk_plus_offset(p, nb);
4616  set_size_and_pinuse_of_free_chunk(r, rsize);
4617  replace_dv(gm, r, rsize);
4618  }
4619  mem = chunk2mem(p);
4620  check_malloced_chunk(gm, mem, nb);
4621  goto postaction;
4622  }
4623 
4624  else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) {
4625  check_malloced_chunk(gm, mem, nb);
4626  goto postaction;
4627  }
4628  }
4629  }
4630  else if (bytes >= MAX_REQUEST)
4631  nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
4632  else {
4633  nb = pad_request(bytes);
4634  if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) {
4635  check_malloced_chunk(gm, mem, nb);
4636  goto postaction;
4637  }
4638  }
4639 
4640  if (nb <= gm->dvsize) {
4641  size_t rsize = gm->dvsize - nb;
4642  mchunkptr p = gm->dv;
4643  if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
4644  mchunkptr r = gm->dv = chunk_plus_offset(p, nb);
4645  gm->dvsize = rsize;
4646  set_size_and_pinuse_of_free_chunk(r, rsize);
4647  set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4648  }
4649  else { /* exhaust dv */
4650  size_t dvs = gm->dvsize;
4651  gm->dvsize = 0;
4652  gm->dv = 0;
4653  set_inuse_and_pinuse(gm, p, dvs);
4654  }
4655  mem = chunk2mem(p);
4656  check_malloced_chunk(gm, mem, nb);
4657  goto postaction;
4658  }
4659 
4660  else if (nb < gm->topsize) { /* Split top */
4661  size_t rsize = gm->topsize -= nb;
4662  mchunkptr p = gm->top;
4663  mchunkptr r = gm->top = chunk_plus_offset(p, nb);
4664  r->head = rsize | PINUSE_BIT;
4665  set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4666  mem = chunk2mem(p);
4667  check_top_chunk(gm, gm->top);
4668  check_malloced_chunk(gm, mem, nb);
4669  goto postaction;
4670  }
4671 
4672  mem = sys_alloc(gm, nb);
4673 
4674  postaction:
4675  POSTACTION(gm);
4676  return mem;
4677  }
4678 
4679  return 0;
4680 }
4681 
4682 /* ---------------------------- free --------------------------- */
4683 
4684 void dlfree(void* mem) {
4685  /*
4686  Consolidate freed chunks with preceding or succeeding bordering
4687  free chunks, if they exist, and then place in a bin. Intermixed
4688  with special cases for top, dv, mmapped chunks, and usage errors.
4689  */
4690 
4691  if (mem != 0) {
4692  mchunkptr p = mem2chunk(mem);
4693 #if FOOTERS
4694  mstate fm = get_mstate_for(p);
4695  if (!ok_magic(fm)) {
4696  USAGE_ERROR_ACTION(fm, p);
4697  return;
4698  }
4699 #else /* FOOTERS */
4700 #define fm gm
4701 #endif /* FOOTERS */
4702  if (!PREACTION(fm)) {
4703  check_inuse_chunk(fm, p);
4704  if (RTCHECK(ok_address(fm, p) && ok_inuse(p))) {
4705  size_t psize = chunksize(p);
4706  mchunkptr next = chunk_plus_offset(p, psize);
4707  if (!pinuse(p)) {
4708  size_t prevsize = p->prev_foot;
4709  if (is_mmapped(p)) {
4710  psize += prevsize + MMAP_FOOT_PAD;
4711  if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
4712  fm->footprint -= psize;
4713  goto postaction;
4714  }
4715  else {
4716  mchunkptr prev = chunk_minus_offset(p, prevsize);
4717  psize += prevsize;
4718  p = prev;
4719  if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
4720  if (p != fm->dv) {
4721  unlink_chunk(fm, p, prevsize);
4722  }
4723  else if ((next->head & INUSE_BITS) == INUSE_BITS) {
4724  fm->dvsize = psize;
4725  set_free_with_pinuse(p, psize, next);
4726  goto postaction;
4727  }
4728  }
4729  else
4730  goto erroraction;
4731  }
4732  }
4733 
4734  if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
4735  if (!cinuse(next)) { /* consolidate forward */
4736  if (next == fm->top) {
4737  size_t tsize = fm->topsize += psize;
4738  fm->top = p;
4739  p->head = tsize | PINUSE_BIT;
4740  if (p == fm->dv) {
4741  fm->dv = 0;
4742  fm->dvsize = 0;
4743  }
4744  if (should_trim(fm, tsize))
4745  sys_trim(fm, 0);
4746  goto postaction;
4747  }
4748  else if (next == fm->dv) {
4749  size_t dsize = fm->dvsize += psize;
4750  fm->dv = p;
4751  set_size_and_pinuse_of_free_chunk(p, dsize);
4752  goto postaction;
4753  }
4754  else {
4755  size_t nsize = chunksize(next);
4756  psize += nsize;
4757  unlink_chunk(fm, next, nsize);
4758  set_size_and_pinuse_of_free_chunk(p, psize);
4759  if (p == fm->dv) {
4760  fm->dvsize = psize;
4761  goto postaction;
4762  }
4763  }
4764  }
4765  else
4766  set_free_with_pinuse(p, psize, next);
4767 
4768  if (is_small(psize)) {
4769  insert_small_chunk(fm, p, psize);
4770  check_free_chunk(fm, p);
4771  }
4772  else {
4773  tchunkptr tp = (tchunkptr)p;
4774  insert_large_chunk(fm, tp, psize);
4775  check_free_chunk(fm, p);
4776  if (--fm->release_checks == 0)
4777  release_unused_segments(fm);
4778  }
4779  goto postaction;
4780  }
4781  }
4782  erroraction:
4783  USAGE_ERROR_ACTION(fm, p);
4784  postaction:
4785  POSTACTION(fm);
4786  }
4787  }
4788 #if !FOOTERS
4789 #undef fm
4790 #endif /* FOOTERS */
4791 }
4792 
4793 void* dlcalloc(size_t n_elements, size_t elem_size) {
4794  void* mem;
4795  size_t req = 0;
4796  if (n_elements != 0) {
4797  req = n_elements * elem_size;
4798  if (((n_elements | elem_size) & ~(size_t)0xffff) &&
4799  (req / n_elements != elem_size))
4800  req = MAX_SIZE_T; /* force downstream failure on overflow */
4801  }
4802  mem = dlmalloc(req);
4803  if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
4804  memset(mem, 0, req);
4805  return mem;
4806 }
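/*
  Illustrative sketch of the overflow guard used in dlcalloc above
  (checked_mul is a hypothetical helper, not from the dlmalloc sources).
  The division is only attempted when either operand has bits above the
  low 16, because a product of two values below 2^16 cannot overflow a
  32-bit or wider size_t:

  static int checked_mul(size_t a, size_t b, size_t* out) {
    size_t prod = a * b;
    if (((a | b) & ~(size_t)0xffff) &&  // some operand has high bits set
        a != 0 && prod / a != b)
      return 0;                         // overflow would have occurred
    *out = prod;                        // product fits in size_t
    return 1;
  }
*/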
4807 
4808 #endif /* !ONLY_MSPACES */
4809 
4810 /* ------------ Internal support for realloc, memalign, etc -------------- */
4811 
4812 /* Try to realloc; only in-place unless can_move true */
4813 static mchunkptr try_realloc_chunk(mstate m, mchunkptr p, size_t nb,
4814  int can_move) {
4815  mchunkptr newp = 0;
4816  size_t oldsize = chunksize(p);
4817  mchunkptr next = chunk_plus_offset(p, oldsize);
4818  if (RTCHECK(ok_address(m, p) && ok_inuse(p) &&
4819  ok_next(p, next) && ok_pinuse(next))) {
4820  if (is_mmapped(p)) {
4821  newp = mmap_resize(m, p, nb, can_move);
4822  }
4823  else if (oldsize >= nb) { /* already big enough */
4824  size_t rsize = oldsize - nb;
4825  if (rsize >= MIN_CHUNK_SIZE) { /* split off remainder */
4826  mchunkptr r = chunk_plus_offset(p, nb);
4827  set_inuse(m, p, nb);
4828  set_inuse(m, r, rsize);
4829  dispose_chunk(m, r, rsize);
4830  }
4831  newp = p;
4832  }
4833  else if (next == m->top) { /* extend into top */
4834  if (oldsize + m->topsize > nb) {
4835  size_t newsize = oldsize + m->topsize;
4836  size_t newtopsize = newsize - nb;
4837  mchunkptr newtop = chunk_plus_offset(p, nb);
4838  set_inuse(m, p, nb);
4839  newtop->head = newtopsize |PINUSE_BIT;
4840  m->top = newtop;
4841  m->topsize = newtopsize;
4842  newp = p;
4843  }
4844  }
4845  else if (next == m->dv) { /* extend into dv */
4846  size_t dvs = m->dvsize;
4847  if (oldsize + dvs >= nb) {
4848  size_t dsize = oldsize + dvs - nb;
4849  if (dsize >= MIN_CHUNK_SIZE) {
4850  mchunkptr r = chunk_plus_offset(p, nb);
4851  mchunkptr n = chunk_plus_offset(r, dsize);
4852  set_inuse(m, p, nb);
4853  set_inuse(m, r, dsize);
4854  clear_pinuse(n);
4855  m->dvsize = dsize;
4856  m->dv = r;
4857  }
4858  else { /* exhaust dv */
4859  size_t newsize = oldsize + dvs;
4860  set_inuse(m, p, newsize);
4861  m->dvsize = 0;
4862  m->dv = 0;
4863  }
4864  newp = p;
4865  }
4866  }
4867  else if (!cinuse(next)) { /* extend into next free chunk */
4868  size_t nextsize = chunksize(next);
4869  if (oldsize + nextsize >= nb) {
4870  size_t rsize = oldsize + nextsize - nb;
4871  unlink_chunk(m, next, nextsize);
4872  if (rsize < MIN_CHUNK_SIZE) {
4873  size_t newsize = oldsize + nextsize;
4874  set_inuse(m, p, newsize);
4875  }
4876  else {
4877  mchunkptr r = chunk_plus_offset(p, nb);
4878  set_inuse(m, p, nb);
4879  set_inuse(m, r, rsize);
4880  dispose_chunk(m, r, rsize);
4881  }
4882  newp = p;
4883  }
4884  }
4885  }
4886  else {
4887  USAGE_ERROR_ACTION(m, chunk2mem(p));
4888  }
4889  return newp;
4890 }
4891 
4892 static void* internal_memalign(mstate m, size_t alignment, size_t bytes) {
4893  void* mem = 0;
4894  if (alignment < MIN_CHUNK_SIZE) /* must be at least a minimum chunk size */
4895  alignment = MIN_CHUNK_SIZE;
4896  if ((alignment & (alignment-SIZE_T_ONE)) != 0) {/* Ensure a power of 2 */
4897  size_t a = MALLOC_ALIGNMENT << 1;
4898  while (a < alignment) a <<= 1;
4899  alignment = a;
4900  }
4901  if (bytes >= MAX_REQUEST - alignment) {
4902  if (m != 0) { /* Test isn't needed but avoids compiler warning */
4903  MALLOC_FAILURE_ACTION;
4904  }
4905  }
4906  else {
4907  size_t nb = request2size(bytes);
4908  size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD;
4909  mem = internal_malloc(m, req);
4910  if (mem != 0) {
4911  mchunkptr p = mem2chunk(mem);
4912  if (PREACTION(m))
4913  return 0;
4914  if ((((size_t)(mem)) & (alignment - 1)) != 0) { /* misaligned */
4915  /*
4916  Find an aligned spot inside chunk. Since we need to give
4917  back leading space in a chunk of at least MIN_CHUNK_SIZE, if
4918  the first calculation places us at a spot with less than
4919  MIN_CHUNK_SIZE leader, we can move to the next aligned spot.
4920  We've allocated enough total room so that this is always
4921  possible.
4922  */
4923  char* br = (char*)mem2chunk((size_t)(((size_t)((char*)mem + alignment -
4924  SIZE_T_ONE)) &
4925  -alignment));
4926  char* pos = ((size_t)(br - (char*)(p)) >= MIN_CHUNK_SIZE)?
4927  br : br+alignment;
4928  mchunkptr newp = (mchunkptr)pos;
4929  size_t leadsize = pos - (char*)(p);
4930  size_t newsize = chunksize(p) - leadsize;
4931 
4932  if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */
4933  newp->prev_foot = p->prev_foot + leadsize;
4934  newp->head = newsize;
4935  }
4936  else { /* Otherwise, give back leader, use the rest */
4937  set_inuse(m, newp, newsize);
4938  set_inuse(m, p, leadsize);
4939  dispose_chunk(m, p, leadsize);
4940  }
4941  p = newp;
4942  }
4943 
4944  /* Give back spare room at the end */
4945  if (!is_mmapped(p)) {
4946  size_t size = chunksize(p);
4947  if (size > nb + MIN_CHUNK_SIZE) {
4948  size_t remainder_size = size - nb;
4949  mchunkptr remainder = chunk_plus_offset(p, nb);
4950  set_inuse(m, p, nb);
4951  set_inuse(m, remainder, remainder_size);
4952  dispose_chunk(m, remainder, remainder_size);
4953  }
4954  }
4955 
4956  mem = chunk2mem(p);
4957  assert (chunksize(p) >= nb);
4958  assert(((size_t)mem & (alignment - 1)) == 0);
4959  check_inuse_chunk(m, p);
4960  POSTACTION(m);
4961  }
4962  }
4963  return mem;
4964 }
4965 
4966 /*
4967  Common support for independent_X routines, handling
4968  all of the combinations that can result.
4969  The opts arg has:
4970  bit 0 set if all elements are same size (using sizes[0])
4971  bit 1 set if elements should be zeroed
4972 */
4973 static void** ialloc(mstate m,
4974  size_t n_elements,
4975  size_t* sizes,
4976  int opts,
4977  void* chunks[]) {
4978 
4979  size_t element_size; /* chunksize of each element, if all same */
4980  size_t contents_size; /* total size of elements */
4981  size_t array_size; /* request size of pointer array */
4982  void* mem; /* malloced aggregate space */
4983  mchunkptr p; /* corresponding chunk */
4984  size_t remainder_size; /* remaining bytes while splitting */
4985  void** marray; /* either "chunks" or malloced ptr array */
4986  mchunkptr array_chunk; /* chunk for malloced ptr array */
4987  flag_t was_enabled; /* to disable mmap */
4988  size_t size;
4989  size_t i;
4990 
4991  ensure_initialization();
4992  /* compute array length, if needed */
4993  if (chunks != 0) {
4994  if (n_elements == 0)
4995  return chunks; /* nothing to do */
4996  marray = chunks;
4997  array_size = 0;
4998  }
4999  else {
5000  /* if empty req, must still return chunk representing empty array */
5001  if (n_elements == 0)
5002  return (void**)internal_malloc(m, 0);
5003  marray = 0;
5004  array_size = request2size(n_elements * (sizeof(void*)));
5005  }
5006 
5007  /* compute total element size */
5008  if (opts & 0x1) { /* all-same-size */
5009  element_size = request2size(*sizes);
5010  contents_size = n_elements * element_size;
5011  }
5012  else { /* add up all the sizes */
5013  element_size = 0;
5014  contents_size = 0;
5015  for (i = 0; i != n_elements; ++i)
5016  contents_size += request2size(sizes[i]);
5017  }
5018 
5019  size = contents_size + array_size;
5020 
5021  /*
5022  Allocate the aggregate chunk. First disable direct-mmapping so
5023  malloc won't use it, since we would not be able to later
5024  free/realloc space internal to a segregated mmap region.
5025  */
5026  was_enabled = use_mmap(m);
5027  disable_mmap(m);
5028  mem = internal_malloc(m, size - CHUNK_OVERHEAD);
5029  if (was_enabled)
5030  enable_mmap(m);
5031  if (mem == 0)
5032  return 0;
5033 
5034  if (PREACTION(m)) return 0;
5035  p = mem2chunk(mem);
5036  remainder_size = chunksize(p);
5037 
5038  assert(!is_mmapped(p));
5039 
5040  if (opts & 0x2) { /* optionally clear the elements */
5041  memset((size_t*)mem, 0, remainder_size - SIZE_T_SIZE - array_size);
5042  }
5043 
5044  /* If not provided, allocate the pointer array as final part of chunk */
5045  if (marray == 0) {
5046  size_t array_chunk_size;
5047  array_chunk = chunk_plus_offset(p, contents_size);
5048  array_chunk_size = remainder_size - contents_size;
5049  marray = (void**) (chunk2mem(array_chunk));
5050  set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size);
5051  remainder_size = contents_size;
5052  }
5053 
5054  /* split out elements */
5055  for (i = 0; ; ++i) {
5056  marray[i] = chunk2mem(p);
5057  if (i != n_elements-1) {
5058  if (element_size != 0)
5059  size = element_size;
5060  else
5061  size = request2size(sizes[i]);
5062  remainder_size -= size;
5063  set_size_and_pinuse_of_inuse_chunk(m, p, size);
5064  p = chunk_plus_offset(p, size);
5065  }
5066  else { /* the final element absorbs any overallocation slop */
5067  set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size);
5068  break;
5069  }
5070  }
5071 
5072 #if DEBUG
5073  if (marray != chunks) {
5074  /* final element must have exactly exhausted chunk */
5075  if (element_size != 0) {
5076  assert(remainder_size == element_size);
5077  }
5078  else {
5079  assert(remainder_size == request2size(sizes[i]));
5080  }
5081  check_inuse_chunk(m, mem2chunk(marray));
5082  }
5083  for (i = 0; i != n_elements; ++i)
5084  check_inuse_chunk(m, mem2chunk(marray[i]));
5085 
5086 #endif /* DEBUG */
5087 
5088  POSTACTION(m);
5089  return marray;
5090 }
5091 
5092 /* Try to free all pointers in the given array.
5093  Note: this could be made faster, by delaying consolidation,
5094  at the price of disabling some user integrity checks. We
5095  still optimize some consolidations by combining adjacent
5096  chunks before freeing, which will occur often if allocated
5097  with ialloc or the array is sorted.
5098 */
5099 static size_t internal_bulk_free(mstate m, void* array[], size_t nelem) {
5100  size_t unfreed = 0;
5101  if (!PREACTION(m)) {
5102  void** a;
5103  void** fence = &(array[nelem]);
5104  for (a = array; a != fence; ++a) {
5105  void* mem = *a;
5106  if (mem != 0) {
5107  mchunkptr p = mem2chunk(mem);
5108  size_t psize = chunksize(p);
5109 #if FOOTERS
5110  if (get_mstate_for(p) != m) {
5111  ++unfreed;
5112  continue;
5113  }
5114 #endif
5115  check_inuse_chunk(m, p);
5116  *a = 0;
5117  if (RTCHECK(ok_address(m, p) && ok_inuse(p))) {
5118  void ** b = a + 1; /* try to merge with next chunk */
5119  mchunkptr next = next_chunk(p);
5120  if (b != fence && *b == chunk2mem(next)) {
5121  size_t newsize = chunksize(next) + psize;
5122  set_inuse(m, p, newsize);
5123  *b = chunk2mem(p);
5124  }
5125  else
5126  dispose_chunk(m, p, psize);
5127  }
5128  else {
5129  CORRUPTION_ERROR_ACTION(m);
5130  break;
5131  }
5132  }
5133  }
5134  if (should_trim(m, m->topsize))
5135  sys_trim(m, 0);
5136  POSTACTION(m);
5137  }
5138  return unfreed;
5139 }
5140 
5141 /* Traversal */
5142 #if MALLOC_INSPECT_ALL
5143 static void internal_inspect_all(mstate m,
5144  void(*handler)(void *start,
5145  void *end,
5146  size_t used_bytes,
5147  void* callback_arg),
5148  void* arg) {
5149  if (is_initialized(m)) {
5150  mchunkptr top = m->top;
5151  msegmentptr s;
5152  for (s = &m->seg; s != 0; s = s->next) {
5153  mchunkptr q = align_as_chunk(s->base);
5154  while (segment_holds(s, q) && q->head != FENCEPOST_HEAD) {
5155  mchunkptr next = next_chunk(q);
5156  size_t sz = chunksize(q);
5157  size_t used;
5158  void* start;
5159  if (is_inuse(q)) {
5160  used = sz - CHUNK_OVERHEAD; /* must not be mmapped */
5161  start = chunk2mem(q);
5162  }
5163  else {
5164  used = 0;
5165  if (is_small(sz)) { /* offset by possible bookkeeping */
5166  start = (void*)((char*)q + sizeof(struct malloc_chunk));
5167  }
5168  else {
5169  start = (void*)((char*)q + sizeof(struct malloc_tree_chunk));
5170  }
5171  }
5172  if (start < (void*)next) /* skip if all space is bookkeeping */
5173  handler(start, next, used, arg);
5174  if (q == top)
5175  break;
5176  q = next;
5177  }
5178  }
5179  }
5180 }
5181 #endif /* MALLOC_INSPECT_ALL */
5182 
5183 /* ------------------ Exported realloc, memalign, etc -------------------- */
5184 
5185 #if !ONLY_MSPACES
5186 
5187 void* dlrealloc(void* oldmem, size_t bytes) {
5188  void* mem = 0;
5189  if (oldmem == 0) {
5190  mem = dlmalloc(bytes);
5191  }
5192  else if (bytes >= MAX_REQUEST) {
5193  MALLOC_FAILURE_ACTION;
5194  }
5195 #ifdef REALLOC_ZERO_BYTES_FREES
5196  else if (bytes == 0) {
5197  dlfree(oldmem);
5198  }
5199 #endif /* REALLOC_ZERO_BYTES_FREES */
5200  else {
5201  size_t nb = request2size(bytes);
5202  mchunkptr oldp = mem2chunk(oldmem);
5203 #if ! FOOTERS
5204  mstate m = gm;
5205 #else /* FOOTERS */
5206  mstate m = get_mstate_for(oldp);
5207  if (!ok_magic(m)) {
5208  USAGE_ERROR_ACTION(m, oldmem);
5209  return 0;
5210  }
5211 #endif /* FOOTERS */
5212  if (!PREACTION(m)) {
5213  mchunkptr newp = try_realloc_chunk(m, oldp, nb, 1);
5214  POSTACTION(m);
5215  if (newp != 0) {
5216  check_inuse_chunk(m, newp);
5217  mem = chunk2mem(newp);
5218  }
5219  else {
5220  mem = internal_malloc(m, bytes);
5221  if (mem != 0) {
5222  size_t oc = chunksize(oldp) - overhead_for(oldp);
5223  memcpy(mem, oldmem, (oc < bytes)? oc : bytes);
5224  internal_free(m, oldmem);
5225  }
5226  }
5227  }
5228  }
5229  return mem;
5230 }
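/*
  Illustrative caller-side pattern (hypothetical names, not from the
  dlmalloc sources). On failure dlrealloc returns 0 and leaves the old
  block intact, so keep the old pointer until the call succeeds:

  static void* grow_buffer(void* buf, size_t newsize) {
    void* p = dlrealloc(buf, newsize);
    if (p == 0) {              // old block is still valid here
      dlfree(buf);             // this caller chooses to give up and release it
      return 0;
    }
    return p;                  // may or may not equal buf
  }
*/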
5231 
5232 void* dlrealloc_in_place(void* oldmem, size_t bytes) {
5233  void* mem = 0;
5234  if (oldmem != 0) {
5235  if (bytes >= MAX_REQUEST) {
5236  MALLOC_FAILURE_ACTION;
5237  }
5238  else {
5239  size_t nb = request2size(bytes);
5240  mchunkptr oldp = mem2chunk(oldmem);
5241 #if ! FOOTERS
5242  mstate m = gm;
5243 #else /* FOOTERS */
5244  mstate m = get_mstate_for(oldp);
5245  if (!ok_magic(m)) {
5246  USAGE_ERROR_ACTION(m, oldmem);
5247  return 0;
5248  }
5249 #endif /* FOOTERS */
5250  if (!PREACTION(m)) {
5251  mchunkptr newp = try_realloc_chunk(m, oldp, nb, 0);
5252  POSTACTION(m);
5253  if (newp == oldp) {
5254  check_inuse_chunk(m, newp);
5255  mem = oldmem;
5256  }
5257  }
5258  }
5259  }
5260  return mem;
5261 }
5262 
5263 void* dlmemalign(size_t alignment, size_t bytes) {
5264  if (alignment <= MALLOC_ALIGNMENT) {
5265  return dlmalloc(bytes);
5266  }
5267  return internal_memalign(gm, alignment, bytes);
5268 }
5269 
5270 int dlposix_memalign(void** pp, size_t alignment, size_t bytes) {
5271  void* mem = 0;
5272  if (alignment == MALLOC_ALIGNMENT)
5273  mem = dlmalloc(bytes);
5274  else {
5275  size_t d = alignment / sizeof(void*);
5276  size_t r = alignment % sizeof(void*);
5277  if (r != 0 || d == 0 || (d & (d-SIZE_T_ONE)) != 0)
5278  return EINVAL;
5279  else if (bytes <= MAX_REQUEST - alignment) {
5280  if (alignment < MIN_CHUNK_SIZE)
5281  alignment = MIN_CHUNK_SIZE;
5282  mem = internal_memalign(gm, alignment, bytes);
5283  }
5284  }
5285  if (mem == 0)
5286  return ENOMEM;
5287  else {
5288  *pp = mem;
5289  return 0;
5290  }
5291 }
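/*
  Illustrative use of dlposix_memalign (hypothetical caller, not from the
  dlmalloc sources). Alignment must be a power of two that is a multiple
  of sizeof(void*); the call returns 0 on success, EINVAL for a bad
  alignment, or ENOMEM on allocation failure:

  static void* cache_aligned_block(size_t bytes) {
    void* mem = 0;
    return (dlposix_memalign(&mem, 64, bytes) == 0) ? mem : 0;
  }
*/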
5292 
5293 void* dlvalloc(size_t bytes) {
5294  size_t pagesz;
5295  ensure_initialization();
5296  pagesz = mparams.page_size;
5297  return dlmemalign(pagesz, bytes);
5298 }
5299 
5300 void* dlpvalloc(size_t bytes) {
5301  size_t pagesz;
5302  ensure_initialization();
5303  pagesz = mparams.page_size;
5304  return dlmemalign(pagesz, (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE));
5305 }
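/*
  Worked example of the rounding above, assuming a 4096-byte page size:
  for bytes == 5000, (5000 + 4095) & ~4095 == 8192, so dlpvalloc returns
  a page-aligned block covering two whole pages, whereas dlvalloc(5000)
  returns a page-aligned block of at least 5000 usable bytes.
*/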
5306 
5307 void** dlindependent_calloc(size_t n_elements, size_t elem_size,
5308  void* chunks[]) {
5309  size_t sz = elem_size; /* serves as 1-element array */
5310  return ialloc(gm, n_elements, &sz, 3, chunks);
5311 }
5312 
5313 void** dlindependent_comalloc(size_t n_elements, size_t sizes[],
5314  void* chunks[]) {
5315  return ialloc(gm, n_elements, sizes, 0, chunks);
5316 }
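/*
  Illustrative use of dlindependent_comalloc (hypothetical caller code,
  not from the dlmalloc sources): carve the per-node arrays out of one
  aggregate request; each returned pointer may later be freed
  individually.

  struct node { int* counts; double* weights; };

  static int init_node(struct node* n, size_t k) {
    size_t sizes[2];
    void*  chunks[2];
    sizes[0] = k * sizeof(int);
    sizes[1] = k * sizeof(double);
    if (dlindependent_comalloc(2, sizes, chunks) == 0)
      return 0;                        // allocation failed
    n->counts  = (int*)chunks[0];
    n->weights = (double*)chunks[1];
    return 1;
  }
*/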
5317 
5318 size_t dlbulk_free(void* array[], size_t nelem) {
5319  return internal_bulk_free(gm, array, nelem);
5320 }
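/*
  Illustrative use of dlbulk_free (hypothetical caller code, not from the
  dlmalloc sources): release a whole pointer array at once. Freed slots
  are cleared to 0, and the return value is the number of entries that
  could not be freed (0 on complete success).

  static size_t release_all(void* ptrs[], size_t n) {
    return dlbulk_free(ptrs, n);
  }
*/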
5321 
5322 #if MALLOC_INSPECT_ALL
5323 void dlmalloc_inspect_all(void(*handler)(void *start,
5324  void *end,
5325  size_t used_bytes,
5326  void* callback_arg),
5327  void* arg) {
5328  ensure_initialization();
5329  if (!PREACTION(gm)) {
5330  internal_inspect_all(gm, handler, arg);
5331  POSTACTION(gm);
5332  }
5333 }
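/*
  Illustrative traversal handler (hypothetical code, not from the dlmalloc
  sources). The handler runs while the allocator lock is held by
  PREACTION above, so it should not call back into malloc/free; here it
  only accumulates totals.

  struct tally { size_t used_bytes; size_t regions; };

  static void count_used(void* start, void* end, size_t used, void* arg) {
    struct tally* t = (struct tally*)arg;
    (void)start; (void)end;
    t->used_bytes += used;             // 0 for free chunks
    t->regions    += 1;
  }

  // usage:  struct tally t = {0, 0};  dlmalloc_inspect_all(count_used, &t);
*/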
5334 #endif /* MALLOC_INSPECT_ALL */
5335 
5336 int dlmalloc_trim(size_t pad) {
5337  int result = 0;
5338  ensure_initialization();
5339  if (!PREACTION(gm)) {
5340  result = sys_trim(gm, pad);
5341  POSTACTION(gm);
5342  }
5343  return result;
5344 }
5345 
5346 size_t dlmalloc_footprint(void) {
5347  return gm->footprint;
5348 }
5349 
5350 size_t dlmalloc_max_footprint(void) {
5351  return gm->max_footprint;
5352 }
5353 
5354 size_t dlmalloc_footprint_limit(void) {
5355  size_t maf = gm->footprint_limit;
5356  return maf == 0 ? MAX_SIZE_T : maf;
5357 }
5358 
5359 size_t dlmalloc_set_footprint_limit(size_t bytes) {
5360  size_t result; /* invert sense of 0 */
5361  if (bytes == 0)
5362  result = granularity_align(1); /* Use minimal size */
5363  else if (bytes == MAX_SIZE_T)
5364  result = 0; /* disable */
5365  else
5366  result = granularity_align(bytes);
5367  return gm->footprint_limit = result;
5368 }
5369 
5370 #if !NO_MALLINFO
5371 struct mallinfo dlmallinfo(void) {
5372  return internal_mallinfo(gm);
5373 }
5374 #endif /* NO_MALLINFO */
5375 
5376 #if !NO_MALLOC_STATS
5377 void dlmalloc_stats() {
5378  internal_malloc_stats(gm);
5379 }
5380 #endif /* NO_MALLOC_STATS */
5381 
5382 int dlmallopt(int param_number, int value) {
5383  return change_mparam(param_number, value);
5384 }
5385 
5386 size_t dlmalloc_usable_size(void* mem) {
5387  if (mem != 0) {
5388  mchunkptr p = mem2chunk(mem);
5389  if (is_inuse(p))
5390  return chunksize(p) - overhead_for(p);
5391  }
5392  return 0;
5393 }
5394 
5395 #endif /* !ONLY_MSPACES */
5396 
5397 /* ----------------------------- user mspaces ---------------------------- */
5398 
5399 #if MSPACES
5400 
5401 static mstate init_user_mstate(char* tbase, size_t tsize) {
5402  size_t msize = pad_request(sizeof(struct malloc_state));
5403  mchunkptr mn;
5404  mchunkptr msp = align_as_chunk(tbase);
5405  mstate m = (mstate)(chunk2mem(msp));
5406  memset(m, 0, msize);
5407  (void)INITIAL_LOCK(&m->mutex);
5408  msp->head = (msize|INUSE_BITS);
5409  m->seg.base = m->least_addr = tbase;
5410  m->seg.size = m->footprint = m->max_footprint = tsize;
5411  m->magic = mparams.magic;
5412  m->release_checks = MAX_RELEASE_CHECK_RATE;
5413  m->mflags = mparams.default_mflags;
5414  m->extp = 0;
5415  m->exts = 0;
5416  disable_contiguous(m);
5417  init_bins(m);
5418  mn = next_chunk(mem2chunk(m));
5419  init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);
5420  check_top_chunk(m, m->top);
5421  return m;
5422 }
5423 
5424 mspace create_mspace(size_t capacity, int locked) {
5425  mstate m = 0;
5426  size_t msize;
5427  ensure_initialization();
5428  msize = pad_request(sizeof(struct malloc_state));
5429  if (capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
5430  size_t rs = ((capacity == 0)? mparams.granularity :
5431  (capacity + TOP_FOOT_SIZE + msize));
5432  size_t tsize = granularity_align(rs);
5433  char* tbase = (char*)(CALL_MMAP(tsize));
5434  if (tbase != CMFAIL) {
5435  m = init_user_mstate(tbase, tsize);
5436  m->seg.sflags = USE_MMAP_BIT;
5437  set_lock(m, locked);
5438  }
5439  }
5440  return (mspace)m;
5441 }
5442 
5443 mspace create_mspace_with_base(void* base, size_t capacity, int locked) {
5444  mstate m = 0;
5445  size_t msize;
5446  ensure_initialization();
5447  msize = pad_request(sizeof(struct malloc_state));
5448  if (capacity > msize + TOP_FOOT_SIZE &&
5449  capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
5450  m = init_user_mstate((char*)base, capacity);
5451  m->seg.sflags = EXTERN_BIT;
5452  set_lock(m, locked);
5453  }
5454  return (mspace)m;
5455 }
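/*
  Illustrative mspace lifecycle (hypothetical caller code, not from the
  dlmalloc sources; requires MSPACES). A capacity of 0 requests the
  default granularity-sized initial segment, and destroy_mspace releases
  everything still allocated in the space.

  static void mspace_demo(void) {
    mspace msp = create_mspace(0, 0);      // default capacity, no locking
    if (msp != 0) {
      void* p = mspace_malloc(msp, 128);
      if (p != 0)
        mspace_free(msp, p);
      destroy_mspace(msp);
    }
  }
*/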
5456 
5457 int mspace_track_large_chunks(mspace msp, int enable) {
5458  int ret = 0;
5459  mstate ms = (mstate)msp;
5460  if (!PREACTION(ms)) {
5461  if (!use_mmap(ms)) {
5462  ret = 1;
5463  }
5464  if (!enable) {
5465  enable_mmap(ms);
5466  } else {
5467  disable_mmap(ms);
5468  }
5469  POSTACTION(ms);
5470  }
5471  return ret;
5472 }
5473 
5474 size_t destroy_mspace(mspace msp) {
5475  size_t freed = 0;
5476  mstate ms = (mstate)msp;
5477  if (ok_magic(ms)) {
5478  msegmentptr sp = &ms->seg;
5479  (void)DESTROY_LOCK(&ms->mutex); /* destroy before unmapped */
5480  while (sp != 0) {
5481  char* base = sp->base;
5482  size_t size = sp->size;
5483  flag_t flag = sp->sflags;
5484  (void)base; /* placate people compiling -Wunused-variable */
5485  sp = sp->next;
5486  if ((flag & USE_MMAP_BIT) && !(flag & EXTERN_BIT) &&
5487  CALL_MUNMAP(base, size) == 0)
5488  freed += size;
5489  }
5490  }
5491  else {
5492  USAGE_ERROR_ACTION(ms,ms);
5493  }
5494  return freed;
5495 }
5496 
5497 /*
5498  mspace versions of routines are near-clones of the global
5499  versions. This is not so nice but better than the alternatives.
5500 */
5501 
5502 void* mspace_malloc(mspace msp, size_t bytes) {
5503  mstate ms = (mstate)msp;
5504  if (!ok_magic(ms)) {
5505  USAGE_ERROR_ACTION(ms,ms);
5506  return 0;
5507  }
5508  if (!PREACTION(ms)) {
5509  void* mem;
5510  size_t nb;
5511  if (bytes <= MAX_SMALL_REQUEST) {
5512  bindex_t idx;
5513  binmap_t smallbits;
5514  nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
5515  idx = small_index(nb);
5516  smallbits = ms->smallmap >> idx;
5517 
5518  if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
5519  mchunkptr b, p;
5520  idx += ~smallbits & 1; /* Uses next bin if idx empty */
5521  b = smallbin_at(ms, idx);
5522  p = b->fd;
5523  assert(chunksize(p) == small_index2size(idx));
5524  unlink_first_small_chunk(ms, b, p, idx);
5525  set_inuse_and_pinuse(ms, p, small_index2size(idx));
5526  mem = chunk2mem(p);
5527  check_malloced_chunk(ms, mem, nb);
5528  goto postaction;
5529  }
5530 
5531  else if (nb > ms->dvsize) {
5532  if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
5533  mchunkptr b, p, r;
5534  size_t rsize;
5535  bindex_t i;
5536  binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
5537  binmap_t leastbit = least_bit(leftbits);
5538  compute_bit2idx(leastbit, i);
5539  b = smallbin_at(ms, i);
5540  p = b->fd;
5541  assert(chunksize(p) == small_index2size(i));
5542  unlink_first_small_chunk(ms, b, p, i);
5543  rsize = small_index2size(i) - nb;
5544  /* Fit here cannot be remainderless if 4byte sizes */
5545  if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
5546  set_inuse_and_pinuse(ms, p, small_index2size(i));
5547  else {
5548  set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
5549  r = chunk_plus_offset(p, nb);
5550  set_size_and_pinuse_of_free_chunk(r, rsize);
5551  replace_dv(ms, r, rsize);
5552  }
5553  mem = chunk2mem(p);
5554  check_malloced_chunk(ms, mem, nb);
5555  goto postaction;
5556  }
5557 
5558  else if (ms->treemap != 0 && (mem = tmalloc_small(ms, nb)) != 0) {
5559  check_malloced_chunk(ms, mem, nb);
5560  goto postaction;
5561  }
5562  }
5563  }
5564  else if (bytes >= MAX_REQUEST)
5565  nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
5566  else {
5567  nb = pad_request(bytes);
5568  if (ms->treemap != 0 && (mem = tmalloc_large(ms, nb)) != 0) {
5569  check_malloced_chunk(ms, mem, nb);
5570  goto postaction;
5571  }
5572  }
5573 
5574  if (nb <= ms->dvsize) {
5575  size_t rsize = ms->dvsize - nb;
5576  mchunkptr p = ms->dv;
5577  if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
5578  mchunkptr r = ms->dv = chunk_plus_offset(p, nb);
5579  ms->dvsize = rsize;
5580  set_size_and_pinuse_of_free_chunk(r, rsize);
5581  set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
5582  }
5583  else { /* exhaust dv */
5584  size_t dvs = ms->dvsize;
5585  ms->dvsize = 0;
5586  ms->dv = 0;
5587  set_inuse_and_pinuse(ms, p, dvs);
5588  }
5589  mem = chunk2mem(p);
5590  check_malloced_chunk(ms, mem, nb);
5591  goto postaction;
5592  }
5593 
5594  else if (nb < ms->topsize) { /* Split top */
5595  size_t rsize = ms->topsize -= nb;
5596  mchunkptr p = ms->top;
5597  mchunkptr r = ms->top = chunk_plus_offset(p, nb);
5598  r->head = rsize | PINUSE_BIT;
5599  set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
5600  mem = chunk2mem(p);
5601  check_top_chunk(ms, ms->top);
5602  check_malloced_chunk(ms, mem, nb);
5603  goto postaction;
5604  }
5605 
5606  mem = sys_alloc(ms, nb);
5607 
5608  postaction:
5609  POSTACTION(ms);
5610  return mem;
5611  }
5612 
5613  return 0;
5614 }
5615 
5616 void mspace_free(mspace msp, void* mem) {
5617  if (mem != 0) {
5618  mchunkptr p = mem2chunk(mem);
5619 #if FOOTERS
5620  mstate fm = get_mstate_for(p);
5621  (void)msp; /* placate people compiling -Wunused */
5622 #else /* FOOTERS */
5623  mstate fm = (mstate)msp;
5624 #endif /* FOOTERS */
5625  if (!ok_magic(fm)) {
5626  USAGE_ERROR_ACTION(fm, p);
5627  return;
5628  }
5629  if (!PREACTION(fm)) {
5630  check_inuse_chunk(fm, p);
5631  if (RTCHECK(ok_address(fm, p) && ok_inuse(p))) {
5632  size_t psize = chunksize(p);
5633  mchunkptr next = chunk_plus_offset(p, psize);
5634  if (!pinuse(p)) {
5635  size_t prevsize = p->prev_foot;
5636  if (is_mmapped(p)) {
5637  psize += prevsize + MMAP_FOOT_PAD;
5638  if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
5639  fm->footprint -= psize;
5640  goto postaction;
5641  }
5642  else {
5643  mchunkptr prev = chunk_minus_offset(p, prevsize);
5644  psize += prevsize;
5645  p = prev;
5646  if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
5647  if (p != fm->dv) {
5648  unlink_chunk(fm, p, prevsize);
5649  }
5650  else if ((next->head & INUSE_BITS) == INUSE_BITS) {
5651  fm->dvsize = psize;
5652  set_free_with_pinuse(p, psize, next);
5653  goto postaction;
5654  }
5655  }
5656  else
5657  goto erroraction;
5658  }
5659  }
5660 
5661  if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
5662  if (!cinuse(next)) { /* consolidate forward */
5663  if (next == fm->top) {
5664  size_t tsize = fm->topsize += psize;
5665  fm->top = p;
5666  p->head = tsize | PINUSE_BIT;
5667  if (p == fm->dv) {
5668  fm->dv = 0;
5669  fm->dvsize = 0;
5670  }
5671  if (should_trim(fm, tsize))
5672  sys_trim(fm, 0);
5673  goto postaction;
5674  }
5675  else if (next == fm->dv) {
5676  size_t dsize = fm->dvsize += psize;
5677  fm->dv = p;
5678  set_size_and_pinuse_of_free_chunk(p, dsize);
5679  goto postaction;
5680  }
5681  else {
5682  size_t nsize = chunksize(next);
5683  psize += nsize;
5684  unlink_chunk(fm, next, nsize);
5685  set_size_and_pinuse_of_free_chunk(p, psize);
5686  if (p == fm->dv) {
5687  fm->dvsize = psize;
5688  goto postaction;
5689  }
5690  }
5691  }
5692  else
5693  set_free_with_pinuse(p, psize, next);
5694 
5695  if (is_small(psize)) {
5696  insert_small_chunk(fm, p, psize);
5697  check_free_chunk(fm, p);
5698  }
5699  else {
5700  tchunkptr tp = (tchunkptr)p;
5701  insert_large_chunk(fm, tp, psize);
5702  check_free_chunk(fm, p);
5703  if (--fm->release_checks == 0)
5704  release_unused_segments(fm);
5705  }
5706  goto postaction;
5707  }
5708  }
5709  erroraction:
5710  USAGE_ERROR_ACTION(fm, p);
5711  postaction:
5712  POSTACTION(fm);
5713  }
5714  }
5715 }
5716 
5717 void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size) {
5718  void* mem;
5719  size_t req = 0;
5720  mstate ms = (mstate)msp;
5721  if (!ok_magic(ms)) {
5722  USAGE_ERROR_ACTION(ms,ms);
5723  return 0;
5724  }
5725  if (n_elements != 0) {
5726  req = n_elements * elem_size;
5727  if (((n_elements | elem_size) & ~(size_t)0xffff) &&
5728  (req / n_elements != elem_size))
5729  req = MAX_SIZE_T; /* force downstream failure on overflow */
5730  }
5731  mem = internal_malloc(ms, req);
5732  if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
5733  memset(mem, 0, req);
5734  return mem;
5735 }
5736 
5737 void* mspace_realloc(mspace msp, void* oldmem, size_t bytes) {
5738  void* mem = 0;
5739  if (oldmem == 0) {
5740  mem = mspace_malloc(msp, bytes);
5741  }
5742  else if (bytes >= MAX_REQUEST) {
5743  MALLOC_FAILURE_ACTION;
5744  }
5745 #ifdef REALLOC_ZERO_BYTES_FREES
5746  else if (bytes == 0) {
5747  mspace_free(msp, oldmem);
5748  }
5749 #endif /* REALLOC_ZERO_BYTES_FREES */
5750  else {
5751  size_t nb = request2size(bytes);
5752  mchunkptr oldp = mem2chunk(oldmem);
5753 #if ! FOOTERS
5754  mstate m = (mstate)msp;
5755 #else /* FOOTERS */
5756  mstate m = get_mstate_for(oldp);
5757  if (!ok_magic(m)) {
5758  USAGE_ERROR_ACTION(m, oldmem);
5759  return 0;
5760  }
5761 #endif /* FOOTERS */
5762  if (!PREACTION(m)) {
5763  mchunkptr newp = try_realloc_chunk(m, oldp, nb, 1);
5764  POSTACTION(m);
5765  if (newp != 0) {
5766  check_inuse_chunk(m, newp);
5767  mem = chunk2mem(newp);
5768  }
5769  else {
5770  mem = mspace_malloc(m, bytes);
5771  if (mem != 0) {
5772  size_t oc = chunksize(oldp) - overhead_for(oldp);
5773  memcpy(mem, oldmem, (oc < bytes)? oc : bytes);
5774  mspace_free(m, oldmem);
5775  }
5776  }
5777  }
5778  }
5779  return mem;
5780 }
5781 
5782 void* mspace_realloc_in_place(mspace msp, void* oldmem, size_t bytes) {
5783  void* mem = 0;
5784  if (oldmem != 0) {
5785  if (bytes >= MAX_REQUEST) {
5786  MALLOC_FAILURE_ACTION;
5787  }
5788  else {
5789  size_t nb = request2size(bytes);
5790  mchunkptr oldp = mem2chunk(oldmem);
5791 #if ! FOOTERS
5792  mstate m = (mstate)msp;
5793 #else /* FOOTERS */
5794  mstate m = get_mstate_for(oldp);
5795  (void)msp; /* placate people compiling -Wunused */
5796  if (!ok_magic(m)) {
5797  USAGE_ERROR_ACTION(m, oldmem);
5798  return 0;
5799  }
5800 #endif /* FOOTERS */
5801  if (!PREACTION(m)) {
5802  mchunkptr newp = try_realloc_chunk(m, oldp, nb, 0);
5803  POSTACTION(m);
5804  if (newp == oldp) {
5805  check_inuse_chunk(m, newp);
5806  mem = oldmem;
5807  }
5808  }
5809  }
5810  }
5811  return mem;
5812 }
5813 
5814 void* mspace_memalign(mspace msp, size_t alignment, size_t bytes) {
5815  mstate ms = (mstate)msp;
5816  if (!ok_magic(ms)) {
5817  USAGE_ERROR_ACTION(ms,ms);
5818  return 0;
5819  }
5820  if (alignment <= MALLOC_ALIGNMENT)
5821  return mspace_malloc(msp, bytes);
5822  return internal_memalign(ms, alignment, bytes);
5823 }
5824 
5825 int mspace_posix_memalign(mspace msp, void** pp, size_t alignment, size_t bytes) {
5826  mstate ms = (mstate)msp;
5827  void* mem = 0;
5828  if (alignment == MALLOC_ALIGNMENT)
5829  mem = mspace_malloc(msp,bytes);
5830  else {
5831  size_t d = alignment / sizeof(void*);
5832  size_t r = alignment % sizeof(void*);
5833  if (r != 0 || d == 0 || (d & (d-SIZE_T_ONE)) != 0)
5834  return EINVAL;
5835  else if (bytes <= MAX_REQUEST - alignment) {
5836  if (alignment < MIN_CHUNK_SIZE)
5837  alignment = MIN_CHUNK_SIZE;
5838  mem = internal_memalign(ms, alignment, bytes);
5839  }
5840  }
5841  if (mem == 0)
5842  return ENOMEM;
5843  else {
5844  *pp = mem;
5845  return 0;
5846  }
5847 }
5848 
5849 void** mspace_independent_calloc(mspace msp, size_t n_elements,
5850  size_t elem_size, void* chunks[]) {
5851  size_t sz = elem_size; /* serves as 1-element array */
5852  mstate ms = (mstate)msp;
5853  if (!ok_magic(ms)) {
5854  USAGE_ERROR_ACTION(ms,ms);
5855  return 0;
5856  }
5857  return ialloc(ms, n_elements, &sz, 3, chunks);
5858 }
5859 
5860 void** mspace_independent_comalloc(mspace msp, size_t n_elements,
5861  size_t sizes[], void* chunks[]) {
5862  mstate ms = (mstate)msp;
5863  if (!ok_magic(ms)) {
5864  USAGE_ERROR_ACTION(ms,ms);
5865  return 0;
5866  }
5867  return ialloc(ms, n_elements, sizes, 0, chunks);
5868 }
5869 
5870 size_t mspace_bulk_free(mspace msp, void* array[], size_t nelem) {
5871  return internal_bulk_free((mstate)msp, array, nelem);
5872 }
5873 
5874 #if MALLOC_INSPECT_ALL
5875 void mspace_inspect_all(mspace msp,
5876  void(*handler)(void *start,
5877  void *end,
5878  size_t used_bytes,
5879  void* callback_arg),
5880  void* arg) {
5881  mstate ms = (mstate)msp;
5882  if (ok_magic(ms)) {
5883  if (!PREACTION(ms)) {
5884  internal_inspect_all(ms, handler, arg);
5885  POSTACTION(ms);
5886  }
5887  }
5888  else {
5889  USAGE_ERROR_ACTION(ms,ms);
5890  }
5891 }
5892 #endif /* MALLOC_INSPECT_ALL */
5893 
5894 int mspace_trim(mspace msp, size_t pad) {
5895  int result = 0;
5896  mstate ms = (mstate)msp;
5897  if (ok_magic(ms)) {
5898  if (!PREACTION(ms)) {
5899  result = sys_trim(ms, pad);
5900  POSTACTION(ms);
5901  }
5902  }
5903  else {
5904  USAGE_ERROR_ACTION(ms,ms);
5905  }
5906  return result;
5907 }
5908 
5909 #if !NO_MALLOC_STATS
5910 void mspace_malloc_stats(mspace msp) {
5911  mstate ms = (mstate)msp;
5912  if (ok_magic(ms)) {
5913  internal_malloc_stats(ms);
5914  }
5915  else {
5916  USAGE_ERROR_ACTION(ms,ms);
5917  }
5918 }
5919 #endif /* NO_MALLOC_STATS */
5920 
5921 size_t mspace_footprint(mspace msp) {
5922  size_t result = 0;
5923  mstate ms = (mstate)msp;
5924  if (ok_magic(ms)) {
5925  result = ms->footprint;
5926  }
5927  else {
5928  USAGE_ERROR_ACTION(ms,ms);
5929  }
5930  return result;
5931 }
5932 
5933 size_t mspace_max_footprint(mspace msp) {
5934  size_t result = 0;
5935  mstate ms = (mstate)msp;
5936  if (ok_magic(ms)) {
5937  result = ms->max_footprint;
5938  }
5939  else {
5940  USAGE_ERROR_ACTION(ms,ms);
5941  }
5942  return result;
5943 }
5944 
5945 size_t mspace_footprint_limit(mspace msp) {
5946  size_t result = 0;
5947  mstate ms = (mstate)msp;
5948  if (ok_magic(ms)) {
5949  size_t maf = ms->footprint_limit;
5950  result = (maf == 0) ? MAX_SIZE_T : maf;
5951  }
5952  else {
5953  USAGE_ERROR_ACTION(ms,ms);
5954  }
5955  return result;
5956 }
5957 
5958 size_t mspace_set_footprint_limit(mspace msp, size_t bytes) {
5959  size_t result = 0;
5960  mstate ms = (mstate)msp;
5961  if (ok_magic(ms)) {
5962  if (bytes == 0)
5963  result = granularity_align(1); /* Use minimal size */
5964  else if (bytes == MAX_SIZE_T)
5965  result = 0; /* disable */
5966  else
5967  result = granularity_align(bytes);
5968  ms->footprint_limit = result;
5969  }
5970  else {
5971  USAGE_ERROR_ACTION(ms,ms);
5972  }
5973  return result;
5974 }
5975 
5976 #if !NO_MALLINFO
5977 struct mallinfo mspace_mallinfo(mspace msp) {
5978  mstate ms = (mstate)msp;
5979  if (!ok_magic(ms)) {
5980  USAGE_ERROR_ACTION(ms,ms);
5981  }
5982  return internal_mallinfo(ms);
5983 }
5984 #endif /* NO_MALLINFO */
5985 
5986 size_t mspace_usable_size(const void* mem) {
5987  if (mem != 0) {
5988  mchunkptr p = mem2chunk(mem);
5989  if (is_inuse(p))
5990  return chunksize(p) - overhead_for(p);
5991  }
5992  return 0;
5993 }
5994 
5995 int mspace_mallopt(int param_number, int value) {
5996  return change_mparam(param_number, value);
5997 }
5998 
5999 #endif /* MSPACES */
6000 
6001 
6002 /* -------------------- Alternative MORECORE functions ------------------- */
6003 
6004 /*
6005  Guidelines for creating a custom version of MORECORE:
6006 
6007  * For best performance, MORECORE should allocate in multiples of pagesize.
6008  * MORECORE may allocate more memory than requested. (Or even less,
6009  but this will usually result in a malloc failure.)
6010  * MORECORE must not allocate memory when given argument zero, but
6011  instead return one past the end address of memory from previous
6012  nonzero call.
6013  * For best performance, consecutive calls to MORECORE with positive
6014  arguments should return increasing addresses, indicating that
6015  space has been contiguously extended.
6016  * Even though consecutive calls to MORECORE need not return contiguous
6017  addresses, it must be OK for malloc'ed chunks to span multiple
6018  regions in those cases where they do happen to be contiguous.
6019  * MORECORE need not handle negative arguments -- it may instead
6020  just return MFAIL when given negative arguments.
6021  Negative arguments are always multiples of pagesize. MORECORE
6022  must not misinterpret negative args as large positive unsigned
6023  args. You can suppress all such calls from even occurring by defining
6024  MORECORE_CANNOT_TRIM.
6025 
6026  As an example alternative MORECORE, here is a custom allocator
6027  kindly contributed for pre-OSX macOS. It uses virtually but not
6028  necessarily physically contiguous non-paged memory (locked in,
6029  present and won't get swapped out). You can use it by uncommenting
6030  this section, adding some #includes, and setting up the appropriate
6031  defines above:
6032 
6033  #define MORECORE osMoreCore
6034 
6035  There is also a shutdown routine that should somehow be called for
6036  cleanup upon program exit.
6037 
6038  #define MAX_POOL_ENTRIES 100
6039  #define MINIMUM_MORECORE_SIZE (64 * 1024U)
6040  static int next_os_pool;
6041  void *our_os_pools[MAX_POOL_ENTRIES];
6042 
6043  void *osMoreCore(int size)
6044  {
6045  void *ptr = 0;
6046  static void *sbrk_top = 0;
6047 
6048  if (size > 0)
6049  {
6050  if (size < MINIMUM_MORECORE_SIZE)
6051  size = MINIMUM_MORECORE_SIZE;
6052  if (CurrentExecutionLevel() == kTaskLevel)
6053  ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
6054  if (ptr == 0)
6055  {
6056  return (void *) MFAIL;
6057  }
6058  // save ptrs so they can be freed during cleanup
6059  our_os_pools[next_os_pool] = ptr;
6060  next_os_pool++;
6061  ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
6062  sbrk_top = (char *) ptr + size;
6063  return ptr;
6064  }
6065  else if (size < 0)
6066  {
6067  // we don't currently support shrink behavior
6068  return (void *) MFAIL;
6069  }
6070  else
6071  {
6072  return sbrk_top;
6073  }
6074  }
6075 
6076  // cleanup any allocated memory pools
6077  // called as last thing before shutting down driver
6078 
6079  void osCleanupMem(void)
6080  {
6081  void **ptr;
6082 
6083  for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
6084  if (*ptr)
6085  {
6086  PoolDeallocate(*ptr);
6087  *ptr = 0;
6088  }
6089  }
6090 
6091 */
6092 
6093 
6094 /* -----------------------------------------------------------------------
6095 History:
6096  v2.8.6 Wed Aug 29 06:57:58 2012 Doug Lea
6097  * fix bad comparison in dlposix_memalign
6098  * don't reuse adjusted asize in sys_alloc
6099  * add LOCK_AT_FORK -- thanks to Kirill Artamonov for the suggestion
6100  * reduce compiler warnings -- thanks to all who reported/suggested these
6101 
6102  v2.8.5 Sun May 22 10:26:02 2011 Doug Lea (dl at gee)
6103  * Always perform unlink checks unless INSECURE
6104  * Add posix_memalign.
6105  * Improve realloc to expand in more cases; expose realloc_in_place.
6106  Thanks to Peter Buhr for the suggestion.
6107  * Add footprint_limit, inspect_all, bulk_free. Thanks
6108  to Barry Hayes and others for the suggestions.
6109  * Internal refactorings to avoid calls while holding locks
6110  * Use non-reentrant locks by default. Thanks to Roland McGrath
6111  for the suggestion.
6112  * Small fixes to mspace_destroy, reset_on_error.
6113  * Various configuration extensions/changes. Thanks
6114  to all who contributed these.
6115 
6116  V2.8.4a Thu Apr 28 14:39:43 2011 (dl at gee.cs.oswego.edu)
6117  * Update Creative Commons URL
6118 
6119  V2.8.4 Wed May 27 09:56:23 2009 Doug Lea (dl at gee)
6120  * Use zeros instead of prev foot for is_mmapped
6121  * Add mspace_track_large_chunks; thanks to Jean Brouwers
6122  * Fix set_inuse in internal_realloc; thanks to Jean Brouwers
6123  * Fix insufficient sys_alloc padding when using 16byte alignment
6124  * Fix bad error check in mspace_footprint
6125  * Adaptations for ptmalloc; thanks to Wolfram Gloger.
6126  * Reentrant spin locks; thanks to Earl Chew and others
6127  * Win32 improvements; thanks to Niall Douglas and Earl Chew
6128  * Add NO_SEGMENT_TRAVERSAL and MAX_RELEASE_CHECK_RATE options
6129  * Extension hook in malloc_state
6130  * Various small adjustments to reduce warnings on some compilers
6131  * Various configuration extensions/changes for more platforms. Thanks
6132  to all who contributed these.
6133 
6134  V2.8.3 Thu Sep 22 11:16:32 2005 Doug Lea (dl at gee)
6135  * Add max_footprint functions
6136  * Ensure all appropriate literals are size_t
6137  * Fix conditional compilation problem for some #define settings
6138  * Avoid concatenating segments with the one provided
6139  in create_mspace_with_base
6140  * Rename some variables to avoid compiler shadowing warnings
6141  * Use explicit lock initialization.
6142  * Better handling of sbrk interference.
6143  * Simplify and fix segment insertion, trimming and mspace_destroy
6144  * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x
6145  * Thanks especially to Dennis Flanagan for help on these.
6146 
6147  V2.8.2 Sun Jun 12 16:01:10 2005 Doug Lea (dl at gee)
6148  * Fix memalign brace error.
6149 
6150  V2.8.1 Wed Jun 8 16:11:46 2005 Doug Lea (dl at gee)
6151  * Fix improper #endif nesting in C++
6152  * Add explicit casts needed for C++
6153 
6154  V2.8.0 Mon May 30 14:09:02 2005 Doug Lea (dl at gee)
6155  * Use trees for large bins
6156  * Support mspaces
6157  * Use segments to unify sbrk-based and mmap-based system allocation,
6158  removing need for emulation on most platforms without sbrk.
6159  * Default safety checks
6160  * Optional footer checks. Thanks to William Robertson for the idea.
6161  * Internal code refactoring
6162  * Incorporate suggestions and platform-specific changes.
6163  Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,
6164  Aaron Bachmann, Emery Berger, and others.
6165  * Speed up non-fastbin processing enough to remove fastbins.
6166  * Remove useless cfree() to avoid conflicts with other apps.
6167  * Remove internal memcpy, memset. Compilers handle builtins better.
6168  * Remove some options that no one ever used and rename others.
6169 
6170  V2.7.2 Sat Aug 17 09:07:30 2002 Doug Lea (dl at gee)
6171  * Fix malloc_state bitmap array misdeclaration
6172 
6173  V2.7.1 Thu Jul 25 10:58:03 2002 Doug Lea (dl at gee)
6174  * Allow tuning of FIRST_SORTED_BIN_SIZE
6175  * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
6176  * Better detection and support for non-contiguousness of MORECORE.
6177  Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
6178  * Bypass most of malloc if no frees. Thanks To Emery Berger.
6179  * Fix freeing of old top non-contiguous chunk in sysmalloc.
6180  * Raised default trim and map thresholds to 256K.
6181  * Fix mmap-related #defines. Thanks to Lubos Lunak.
6182  * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
6183  * Branch-free bin calculation
6184  * Default trim and mmap thresholds now 256K.
6185 
6186  V2.7.0 Sun Mar 11 14:14:06 2001 Doug Lea (dl at gee)
6187  * Introduce independent_comalloc and independent_calloc.
6188  Thanks to Michael Pachos for motivation and help.
6189  * Make optional .h file available
6190  * Allow > 2GB requests on 32bit systems.
6191  * new WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.
6192  Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
6193  and Anonymous.
6194  * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
6195  helping test this.)
6196  * memalign: check alignment arg
6197  * realloc: don't try to shift chunks backwards, since this
6198  leads to more fragmentation in some programs and doesn't
6199  seem to help in any others.
6200  * Collect all cases in malloc requiring system memory into sysmalloc
6201  * Use mmap as backup to sbrk
6202  * Place all internal state in malloc_state
6203  * Introduce fastbins (although similar to 2.5.1)
6204  * Many minor tunings and cosmetic improvements
6205  * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
6206  * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
6207  Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.
6208  * Include errno.h to support default failure action.
6209 
6210  V2.6.6 Sun Dec 5 07:42:19 1999 Doug Lea (dl at gee)
6211  * return null for negative arguments
6212  * Added Several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
6213  * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
6214  (e.g. WIN32 platforms)
6215  * Cleanup header file inclusion for WIN32 platforms
6216  * Cleanup code to avoid Microsoft Visual C++ compiler complaints
6217  * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
6218  memory allocation routines
6219  * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
6220  * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
6221  usage of 'assert' in non-WIN32 code
6222  * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
6223  avoid infinite loop
6224  * Always call 'fREe()' rather than 'free()'
6225 
6226  V2.6.5 Wed Jun 17 15:57:31 1998 Doug Lea (dl at gee)
6227  * Fixed ordering problem with boundary-stamping
6228 
6229  V2.6.3 Sun May 19 08:17:58 1996 Doug Lea (dl at gee)
6230  * Added pvalloc, as recommended by H.J. Liu
6231  * Added 64bit pointer support mainly from Wolfram Gloger
6232  * Added anonymously donated WIN32 sbrk emulation
6233  * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
6234  * malloc_extend_top: fix mask error that caused wastage after
6235  foreign sbrks
6236  * Add linux mremap support code from HJ Liu
6237 
6238  V2.6.2 Tue Dec 5 06:52:55 1995 Doug Lea (dl at gee)
6239  * Integrated most documentation with the code.
6240  * Add support for mmap, with help from
6241  Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
6242  * Use last_remainder in more cases.
6243  * Pack bins using idea from colin@nyx10.cs.du.edu
6244  * Use ordered bins instead of best-fit threshold
6245  * Eliminate block-local decls to simplify tracing and debugging.
6246  * Support another case of realloc via move into top
6247  * Fix error occurring when initial sbrk_base not word-aligned.
6248  * Rely on page size for units instead of SBRK_UNIT to
6249  avoid surprises about sbrk alignment conventions.
6250  * Add mallinfo, mallopt. Thanks to Raymond Nijssen
6251  (raymond@es.ele.tue.nl) for the suggestion.
6252  * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
6253  * More precautions for cases where other routines call sbrk,
6254  courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
6255  * Added macros etc., allowing use in linux libc from
6256  H.J. Lu (hjl@gnu.ai.mit.edu)
6257  * Inverted this history list
6258 
6259  V2.6.1 Sat Dec 2 14:10:57 1995 Doug Lea (dl at gee)
6260  * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
6261  * Removed all preallocation code since under current scheme
6262  the work required to undo bad preallocations exceeds
6263  the work saved in good cases for most test programs.
6264  * No longer use return list or unconsolidated bins since
6265  no scheme using them consistently outperforms those that don't
6266  given above changes.
6267  * Use best fit for very large chunks to prevent some worst-cases.
6268  * Added some support for debugging
6269 
6270  V2.6.0 Sat Nov 4 07:05:23 1995 Doug Lea (dl at gee)
6271  * Removed footers when chunks are in use. Thanks to
6272  Paul Wilson (wilson@cs.texas.edu) for the suggestion.
6273 
6274  V2.5.4 Wed Nov 1 07:54:51 1995 Doug Lea (dl at gee)
6275  * Added malloc_trim, with help from Wolfram Gloger
6276  (wmglo@Dent.MED.Uni-Muenchen.DE).
6277 
6278  V2.5.3 Tue Apr 26 10:16:01 1994 Doug Lea (dl at g)
6279 
6280  V2.5.2 Tue Apr 5 16:20:40 1994 Doug Lea (dl at g)
6281  * realloc: try to expand in both directions
6282  * malloc: swap order of clean-bin strategy;
6283  * realloc: only conditionally expand backwards
6284  * Try not to scavenge used bins
6285  * Use bin counts as a guide to preallocation
6286  * Occasionally bin return list chunks in first scan
6287  * Add a few optimizations from colin@nyx10.cs.du.edu
6288 
6289  V2.5.1 Sat Aug 14 15:40:43 1993 Doug Lea (dl at g)
6290  * faster bin computation & slightly different binning
6291  * merged all consolidations to one part of malloc proper
6292  (eliminating old malloc_find_space & malloc_clean_bin)
6293  * Scan 2 returns chunks (not just 1)
6294  * Propagate failure in realloc if malloc returns 0
6295  * Add stuff to allow compilation on non-ANSI compilers
6296  from kpv@research.att.com
6297 
6298  V2.5 Sat Aug 7 07:41:59 1993 Doug Lea (dl at g.oswego.edu)
6299  * removed potential for odd address access in prev_chunk
6300  * removed dependency on getpagesize.h
6301  * misc cosmetics and a bit more internal documentation
6302  * anticosmetics: mangled names in macros to evade debugger strangeness
6303  * tested on sparc, hp-700, dec-mips, rs6000
6304  with gcc & native cc (hp, dec only) allowing
6305  Detlefs & Zorn comparison study (in SIGPLAN Notices.)
6306 
6307  Trial version Fri Aug 28 13:14:29 1992 Doug Lea (dl at g.oswego.edu)
6308  * Based loosely on libg++-1.2X malloc. (It retains some of the overall
6309  structure of old version, but most details differ.)
6310 
6311 */
Definition: dlmalloc.c:2624