		Dynamic DMA mapping using the generic device
		============================================

	James E.J. Bottomley <James.Bottomley@HansenPartnership.com>

This document describes the DMA API.  For a more gentle introduction
phrased in terms of the pci_ equivalents (and actual examples) see
DMA-mapping.txt.

This API is split into two pieces.  Part I describes the API and the
corresponding pci_ API.  Part II describes the extensions to the API
for supporting non-consistent memory machines.  Unless you know that
your driver absolutely has to support non-consistent platforms (this
is usually only legacy platforms) you should only use the API
described in Part I.

Part I - pci_ and dma_ Equivalent API
-------------------------------------

To get the pci_ API, you must #include <linux/pci.h>
To get the dma_ API, you must #include <linux/dma-mapping.h>


Part Ia - Using large dma-coherent buffers
------------------------------------------

void *
dma_alloc_coherent(struct device *dev, size_t size,
		   dma_addr_t *dma_handle, int flag)
void *
pci_alloc_consistent(struct pci_dev *dev, size_t size,
		     dma_addr_t *dma_handle)

Consistent memory is memory for which a write by either the device or
the processor can immediately be read by the processor or device
without having to worry about caching effects.  (You may however need
to make sure to flush the processor's write buffers before telling
devices to read that memory.)

This routine allocates a region of <size> bytes of consistent memory.
It also returns a <dma_handle> which may be cast to an unsigned
integer the same width as the bus and used as the physical address
base of the region.

Returns: a pointer to the allocated region (in the processor's virtual
address space) or NULL if the allocation failed.

Note: consistent memory can be expensive on some platforms, and the
minimum allocation length may be as big as a page, so you should
consolidate your requests for consistent memory as much as possible.
The simplest way to do that is to use the dma_pool calls (see below).

The flag parameter (dma_alloc_coherent only) allows the caller to
specify the GFP_ flags (see kmalloc) for the allocation (the
implementation may choose to ignore flags that affect the location of
the returned memory, like GFP_DMA).  For pci_alloc_consistent, you
must assume GFP_ATOMIC behaviour.

void
dma_free_coherent(struct device *dev, size_t size, void *cpu_addr,
		  dma_addr_t dma_handle)
void
pci_free_consistent(struct pci_dev *dev, size_t size, void *cpu_addr,
		    dma_addr_t dma_handle)

Free the region of consistent memory you previously allocated.  dev,
size and dma_handle must all be the same as those passed into the
consistent allocation.  cpu_addr must be the virtual address returned
by the consistent allocation.
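
As a brief illustration, here is a minimal sketch of allocating and
freeing consistent memory for a hypothetical descriptor ring; the
structure, function names and RING_BYTES size are invented for the
example and are not part of the API.

	#include <linux/dma-mapping.h>
	#include <linux/device.h>
	#include <linux/errno.h>

	#define RING_BYTES	4096	/* hypothetical ring size */

	struct my_ring {
		void		*vaddr;	/* CPU (virtual) address */
		dma_addr_t	bus;	/* address the device uses */
	};

	static int my_ring_alloc(struct device *dev, struct my_ring *ring)
	{
		/* GFP_KERNEL: assumes this path is allowed to sleep */
		ring->vaddr = dma_alloc_coherent(dev, RING_BYTES,
						 &ring->bus, GFP_KERNEL);
		if (!ring->vaddr)
			return -ENOMEM;
		return 0;
	}

	static void my_ring_free(struct device *dev, struct my_ring *ring)
	{
		/* size, cpu_addr and dma_handle must match the allocation */
		dma_free_coherent(dev, RING_BYTES, ring->vaddr, ring->bus);
	}

From atomic context you would pass GFP_ATOMIC instead of GFP_KERNEL.
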
Part Ib - Using small dma-coherent buffers
------------------------------------------

To get this part of the dma_ API, you must #include <linux/dmapool.h>

Many drivers need lots of small dma-coherent memory regions for DMA
descriptors or I/O buffers.  Rather than allocating in units of a page
or more using dma_alloc_coherent(), you can use DMA pools.  These work
much like a struct kmem_cache, except that they use the dma-coherent
allocator, not __get_free_pages().  Also, they understand common
hardware constraints for alignment, like queue heads needing to be
aligned on N byte boundaries.


	struct dma_pool *
	dma_pool_create(const char *name, struct device *dev,
			size_t size, size_t align, size_t alloc);

	struct pci_pool *
	pci_pool_create(const char *name, struct pci_dev *dev,
			size_t size, size_t align, size_t alloc);

The pool create() routines initialize a pool of dma-coherent buffers
for use with a given device.  They must be called in a context which
can sleep.

The "name" is for diagnostics (like a struct kmem_cache name); dev and
size are like what you'd pass to dma_alloc_coherent().  The device's
hardware alignment requirement for this type of data is "align" (which
is expressed in bytes, and must be a power of two).  If your device has
no boundary crossing restrictions, pass 0 for alloc; passing 4096 says
memory allocated from this pool must not cross 4KByte boundaries.


	void *dma_pool_alloc(struct dma_pool *pool, int gfp_flags,
			     dma_addr_t *dma_handle);

	void *pci_pool_alloc(struct pci_pool *pool, int gfp_flags,
			     dma_addr_t *dma_handle);

This allocates memory from the pool; the returned memory will meet the
size and alignment requirements specified at creation time.  Pass
GFP_ATOMIC to prevent blocking, or, if it's permitted (not
in_interrupt, not holding SMP locks), pass GFP_KERNEL to allow
blocking.  Like dma_alloc_coherent(), this returns two values: an
address usable by the CPU, and the DMA address usable by the pool's
device.


	void dma_pool_free(struct dma_pool *pool, void *vaddr,
			   dma_addr_t addr);

	void pci_pool_free(struct pci_pool *pool, void *vaddr,
			   dma_addr_t addr);

This puts memory back into the pool.  The pool is what was passed to
the pool allocation routine; the cpu and dma addresses are what were
returned when that routine allocated the memory being freed.


	void dma_pool_destroy(struct dma_pool *pool);

	void pci_pool_destroy(struct pci_pool *pool);

The pool destroy() routines free the resources of the pool.  They must
be called in a context which can sleep.  Make sure you've freed all
allocated memory back to the pool before you destroy it.
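
To illustrate the pool calls above, here is a minimal sketch for a
hypothetical device whose 64-byte descriptors must be 16-byte aligned
and must not cross a 4096-byte boundary; all names and sizes are
invented for the example.

	#include <linux/dmapool.h>
	#include <linux/device.h>
	#include <linux/errno.h>

	/* hypothetical hardware descriptor constraints */
	#define DESC_SIZE	64
	#define DESC_ALIGN	16
	#define DESC_BOUNDARY	4096

	static struct dma_pool *desc_pool;

	static int my_pool_init(struct device *dev)
	{
		/* must be called from a context that can sleep */
		desc_pool = dma_pool_create("my-descs", dev, DESC_SIZE,
					    DESC_ALIGN, DESC_BOUNDARY);
		return desc_pool ? 0 : -ENOMEM;
	}

	static void *my_desc_get(dma_addr_t *dma)
	{
		/* GFP_ATOMIC: safe in interrupt context or under locks */
		return dma_pool_alloc(desc_pool, GFP_ATOMIC, dma);
	}

	static void my_desc_put(void *vaddr, dma_addr_t dma)
	{
		dma_pool_free(desc_pool, vaddr, dma);
	}

	static void my_pool_exit(void)
	{
		/* every descriptor must already be back in the pool */
		dma_pool_destroy(desc_pool);
	}
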
Part Ic - DMA addressing limitations
------------------------------------

int
dma_supported(struct device *dev, u64 mask)
int
pci_dma_supported(struct pci_dev *hwdev, u64 mask)

Checks to see if the device can support DMA to the memory described by
mask.

Returns: 1 if it can and 0 if it can't.

Notes: This routine merely tests to see if the mask is possible.  It
won't change the current mask settings.  It is intended more as an
internal API for use by the platform than an external API for use by
driver writers.

int
dma_set_mask(struct device *dev, u64 mask)
int
pci_set_dma_mask(struct pci_dev *dev, u64 mask)

Checks to see if the mask is possible and updates the device
parameters if it is.

Returns: 0 if successful and a negative error if not.

u64
dma_get_required_mask(struct device *dev)

After setting the mask with dma_set_mask(), this API returns the
actual mask (within that already set) that the platform requires to
operate efficiently.  Usually this means the returned mask is the
minimum required to cover all of memory.  Examining the required mask
gives drivers with variable descriptor sizes the opportunity to use
smaller descriptors as necessary.

Requesting the required mask does not alter the current mask.  If you
wish to take advantage of it, you should issue another dma_set_mask()
call to lower the mask again.
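
The following sketch shows one way a hypothetical driver might combine
dma_set_mask() and dma_get_required_mask() at probe time; the mask
values and the fallback policy are assumptions made for the example,
not requirements of the API.

	#include <linux/dma-mapping.h>
	#include <linux/device.h>

	#define MY_MASK_64	0xffffffffffffffffULL	/* all 64 bits */
	#define MY_MASK_32	0x00000000ffffffffULL	/* low 32 bits */

	static int my_setup_dma_mask(struct device *dev)
	{
		u64 required;

		if (dma_set_mask(dev, MY_MASK_64) == 0) {
			/*
			 * 64-bit addressing works; see whether the
			 * platform actually needs all of it.
			 */
			required = dma_get_required_mask(dev);
			if (required < MY_MASK_64)
				/* lower the mask, use smaller descriptors */
				return dma_set_mask(dev, required);
			return 0;
		}

		/* fall back to 32-bit addressing */
		return dma_set_mask(dev, MY_MASK_32);
	}
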
Part Id - Streaming DMA mappings
--------------------------------

dma_addr_t
dma_map_single(struct device *dev, void *cpu_addr, size_t size,
	       enum dma_data_direction direction)
dma_addr_t
pci_map_single(struct pci_dev *hwdev, void *cpu_addr, size_t size,
	       int direction)

Maps a piece of processor virtual memory so it can be accessed by the
device and returns the physical handle of the memory.

The direction for both APIs may be converted freely by casting.
However the dma_ API uses a strongly typed enumerator for its
direction:

DMA_NONE		= PCI_DMA_NONE		no direction (used for
						debugging)
DMA_TO_DEVICE		= PCI_DMA_TODEVICE	data is going from the
						memory to the device
DMA_FROM_DEVICE		= PCI_DMA_FROMDEVICE	data is coming from
						the device to the
						memory
DMA_BIDIRECTIONAL	= PCI_DMA_BIDIRECTIONAL	direction isn't known

Notes: Not all memory regions in a machine can be mapped by this API.
Further, regions that appear to be physically contiguous in kernel
virtual space may not be contiguous as physical memory.  Since this
API does not provide any scatter/gather capability, it will fail if
the user tries to map a non-physically-contiguous piece of memory.
For this reason, it is recommended that memory mapped by this API be
obtained only from sources which guarantee it to be physically
contiguous (like kmalloc).

Further, the physical address of the memory must be within the
dma_mask of the device (the dma_mask represents a bit mask of the
addressable region for the device, i.e. if the physical address of
the memory ANDed with the dma_mask is still equal to the physical
address, then the device can perform DMA to the memory).  In order to
ensure that the memory allocated by kmalloc is within the dma_mask,
the driver may specify various platform-dependent flags to restrict
the physical memory range of the allocation (e.g. on x86, GFP_DMA
guarantees to be within the first 16MB of available physical memory,
as required by ISA devices).

Note also that the above constraints on physical contiguity and
dma_mask may not apply if the platform has an IOMMU (a device which
translates addresses on the I/O memory bus into physical memory
addresses).  However, to be portable, device driver writers must *not*
assume that such an IOMMU exists.

Warnings: Memory coherency operates at a granularity called the cache
line width.  In order for memory mapped by this API to operate
correctly, the mapped region must begin exactly on a cache line
boundary and end exactly on one (to prevent two separately mapped
regions from sharing a single cache line).  Since the cache line size
may not be known at compile time, the API will not enforce this
requirement.  Therefore, it is recommended that driver writers who
don't take special care to determine the cache line size at run time
only map virtual regions that begin and end on page boundaries (which
are guaranteed also to be cache line boundaries).

DMA_TO_DEVICE synchronisation must be done after the last modification
of the memory region by the software and before it is handed off to
the device.  Once this primitive is used, memory covered by it should
be treated as read-only by the device.  If the device may write to it
at any point, it should be DMA_BIDIRECTIONAL (see below).

DMA_FROM_DEVICE synchronisation must be done before the driver
accesses data that may be changed by the device.  This memory should
be treated as read-only by the driver.  If the driver needs to write
to it at any point, it should be DMA_BIDIRECTIONAL (see below).

DMA_BIDIRECTIONAL requires special handling: it means that the driver
isn't sure if the memory was modified before being handed off to the
device and also isn't sure if the device will also modify it.  Thus,
you must always sync bidirectional memory twice: once before the
memory is handed off to the device (to make sure all memory changes
are flushed from the processor) and once before the data may be
accessed after being used by the device (to make sure any processor
cache lines are updated with data that the device may have changed).

void
dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size,
		 enum dma_data_direction direction)
void
pci_unmap_single(struct pci_dev *hwdev, dma_addr_t dma_addr,
		 size_t size, int direction)

Unmaps the region previously mapped.  All parameters must be identical
to those passed in (and returned) by the mapping API.

dma_addr_t
dma_map_page(struct device *dev, struct page *page,
	     unsigned long offset, size_t size,
	     enum dma_data_direction direction)
dma_addr_t
pci_map_page(struct pci_dev *hwdev, struct page *page,
	     unsigned long offset, size_t size, int direction)
void
dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size,
	       enum dma_data_direction direction)
void
pci_unmap_page(struct pci_dev *hwdev, dma_addr_t dma_address,
	       size_t size, int direction)

API for mapping and unmapping pages.  All the notes and warnings for
the other mapping APIs apply here.  Also, although the <offset> and
<size> parameters are provided to do partial page mapping, it is
recommended that you never use these unless you really know what the
cache width is.

int
dma_mapping_error(dma_addr_t dma_addr)

int
pci_dma_mapping_error(dma_addr_t dma_addr)

In some circumstances dma_map_single and dma_map_page will fail to
create a mapping.  A driver can check for these errors by testing the
returned dma address with dma_mapping_error().  A non-zero return
value means the mapping could not be created and the driver should
take appropriate action (e.g. reduce current DMA mapping usage or
delay and try again later).
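
Putting the mapping, error-checking and unmapping calls together, here
is a minimal sketch of a hypothetical transmit path; the function name
and the origin of the buffer are assumptions made for the example.

	#include <linux/dma-mapping.h>
	#include <linux/device.h>
	#include <linux/errno.h>

	/* buf must be physically contiguous, e.g. obtained from kmalloc() */
	static int my_send_buffer(struct device *dev, void *buf, size_t len)
	{
		dma_addr_t handle;

		/* the device will only read the buffer: DMA_TO_DEVICE */
		handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
		if (dma_mapping_error(handle))
			return -ENOMEM;	/* mapping could not be created */

		/* ... hand <handle> to the hardware and wait for the DMA
		 * to complete; the CPU must not modify buf meanwhile ... */

		/* parameters must match those used for the mapping */
		dma_unmap_single(dev, handle, len, DMA_TO_DEVICE);
		return 0;
	}
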
int
dma_map_sg(struct device *dev, struct scatterlist *sg,
	   int nents, enum dma_data_direction direction)
int
pci_map_sg(struct pci_dev *hwdev, struct scatterlist *sg,
	   int nents, int direction)

Maps a scatter/gather list from the block layer.

Returns: the number of physical segments mapped (this may be shorter
than <nents> passed in if the block layer determines that some
elements of the scatter/gather list are physically adjacent and thus
may be mapped with a single entry).

Please note that the sg cannot be mapped again if it has been mapped
once.  The mapping process is allowed to destroy information in the
sg.

As with the other mapping interfaces, dma_map_sg can fail.  When it
does, 0 is returned and a driver must take appropriate action.  It is
critical that the driver do something; in the case of a block driver,
aborting the request or even oopsing is better than doing nothing and
corrupting the filesystem.

With scatterlists, you use the resulting mapping like this:

	int i, count = dma_map_sg(dev, sglist, nents, direction);
	struct scatterlist *sg;

	/* walk only the <count> entries the mapping produced */
	for (i = 0, sg = sglist; i < count; i++, sg++) {
		hw_address[i] = sg_dma_address(sg);
		hw_len[i] = sg_dma_len(sg);
	}

where nents is the number of entries in the sglist.

The implementation is free to merge several consecutive sglist entries
into one (e.g. with an IOMMU, or if several pages just happen to be
physically contiguous) and returns the actual number of sg entries it
mapped them to.  On failure, 0 is returned.

Then you should loop count times (note: this can be less than nents
times) and use sg_dma_address() and sg_dma_len() macros where you
previously accessed sg->address and sg->length as shown above.

void
dma_unmap_sg(struct device *dev, struct scatterlist *sg,
	     int nhwentries, enum dma_data_direction direction)
void
pci_unmap_sg(struct pci_dev *hwdev, struct scatterlist *sg,
	     int nents, int direction)

Unmap the previously mapped scatter/gather list.  All the parameters
must be the same as those passed in to the scatter/gather mapping API.

Note: <nents> must be the number you passed in, *not* the number of
physical entries returned.

void
dma_sync_single(struct device *dev, dma_addr_t dma_handle, size_t size,
		enum dma_data_direction direction)
void
pci_dma_sync_single(struct pci_dev *hwdev, dma_addr_t dma_handle,
		    size_t size, int direction)
void
dma_sync_sg(struct device *dev, struct scatterlist *sg, int nelems,
	    enum dma_data_direction direction)
void
pci_dma_sync_sg(struct pci_dev *hwdev, struct scatterlist *sg,
		int nelems, int direction)

Synchronise a single contiguous or scatter/gather mapping.  All the
parameters must be the same as those passed into the single mapping
API.

Notes: You must do this:

- Before reading values that have been written by DMA from the device
  (use the DMA_FROM_DEVICE direction)
- After writing values that will be written to the device using DMA
  (use the DMA_TO_DEVICE direction)
- Before *and* after handing memory to the device if the memory is
  DMA_BIDIRECTIONAL

See also dma_map_single().
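
As an illustration of the rules above, here is a minimal sketch of a
hypothetical receive buffer that stays mapped across many transfers
and is synced each time the device has written new data; the structure
and names are invented for the example.

	#include <linux/dma-mapping.h>
	#include <linux/device.h>

	struct my_rx_buf {
		void		*cpu;	/* kernel virtual address */
		dma_addr_t	dma;	/* handle from dma_map_single() */
		size_t		len;
	};

	static void my_rx_complete(struct device *dev, struct my_rx_buf *rx)
	{
		/*
		 * DMA_FROM_DEVICE: make the device's writes visible to
		 * the CPU before the driver looks at the data.
		 */
		dma_sync_single(dev, rx->dma, rx->len, DMA_FROM_DEVICE);

		/* ... process the data at rx->cpu, treating it as
		 * read-only ... */

		/*
		 * The mapping can now be handed back to the device for
		 * the next transfer without being unmapped and remapped.
		 */
	}
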
Part II - Advanced dma_ usage
-----------------------------

Warning: These pieces of the DMA API have no PCI equivalent.  They
should also not be used in the majority of cases, since they cater for
unlikely corner cases that don't belong in usual drivers.

If you don't understand how cache line coherency works between a
processor and an I/O device, you should not be using this part of the
API at all.

void *
dma_alloc_noncoherent(struct device *dev, size_t size,
		      dma_addr_t *dma_handle, int flag)

Identical to dma_alloc_coherent() except that the platform will choose
to return either consistent or non-consistent memory as it sees fit.
By using this API, you are guaranteeing to the platform that you have
all the correct and necessary sync points for this memory in the
driver should it choose to return non-consistent memory.

Note: where the platform can return consistent memory, it will
guarantee that the sync points become nops.

Warning: Handling non-consistent memory is a real pain.  You should
only ever use this API if you positively know your driver will be
required to work on one of the rare (usually non-PCI) architectures
that simply cannot make consistent memory.

void
dma_free_noncoherent(struct device *dev, size_t size, void *cpu_addr,
		     dma_addr_t dma_handle)

Free memory allocated by the noncoherent API.  All parameters must be
identical to those passed in (and returned by
dma_alloc_noncoherent()).

int
dma_is_consistent(struct device *dev, dma_addr_t dma_handle)

Returns true if the device dev is performing consistent DMA on the
memory area pointed to by the dma_handle.

int
dma_get_cache_alignment(void)

Returns the processor cache alignment.  This is the absolute minimum
alignment *and* width that you must observe when either mapping memory
or doing partial flushes.

Notes: This API may return a number *larger* than the actual cache
line, but it will guarantee that one or more cache lines fit exactly
into the width returned by this call.  It will also always be a power
of two for easy alignment.

void
dma_sync_single_range(struct device *dev, dma_addr_t dma_handle,
		      unsigned long offset, size_t size,
		      enum dma_data_direction direction)

Does a partial sync, starting at offset and continuing for size.  You
must be careful to observe the cache alignment and width when doing
anything like this.  You must also be extra careful about accessing
memory you intend to sync partially.

void
dma_cache_sync(struct device *dev, void *vaddr, size_t size,
	       enum dma_data_direction direction)

Do a partial sync of memory that was allocated by
dma_alloc_noncoherent(), starting at virtual address vaddr and
continuing on for size.  Again, you *must* observe the cache line
boundaries when doing this.
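
For completeness, here is a minimal sketch of how a driver that
genuinely must handle non-consistent memory might combine these calls;
the buffer size and names are invented, and a real driver would also
need a DMA_FROM_DEVICE sync point before reading anything the device
has written.

	#include <linux/dma-mapping.h>
	#include <linux/device.h>
	#include <linux/errno.h>

	#define MY_BUF_LEN	8192	/* example size only */

	static void *my_buf;
	static dma_addr_t my_buf_dma;

	static int my_buf_setup(struct device *dev)
	{
		/* the platform may hand back non-consistent memory here */
		my_buf = dma_alloc_noncoherent(dev, MY_BUF_LEN,
					       &my_buf_dma, GFP_KERNEL);
		return my_buf ? 0 : -ENOMEM;
	}

	static void my_buf_to_device(struct device *dev, size_t len)
	{
		/* len should respect dma_get_cache_alignment() */
		dma_cache_sync(dev, my_buf, len, DMA_TO_DEVICE);
		/* ... now tell the device to read the buffer ... */
	}

	static void my_buf_teardown(struct device *dev)
	{
		dma_free_noncoherent(dev, MY_BUF_LEN, my_buf, my_buf_dma);
	}
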
int
dma_declare_coherent_memory(struct device *dev, dma_addr_t bus_addr,
			    dma_addr_t device_addr, size_t size,
			    int flags)

Declare region of memory to be handed out by dma_alloc_coherent when
it's asked for coherent memory for this device.

bus_addr is the physical address to which the memory is currently
assigned in the bus responding region (this will be used by the
platform to perform the mapping).

device_addr is the physical address the device needs to be programmed
with to actually address this memory (this will be handed out as the
dma_addr_t in dma_alloc_coherent()).

size is the size of the area (must be a multiple of PAGE_SIZE).

flags can be ORed together and are:

DMA_MEMORY_MAP - request that the memory returned from
dma_alloc_coherent() be directly writable.

DMA_MEMORY_IO - request that the memory returned from
dma_alloc_coherent() be addressable using read/write/memcpy_toio etc.

One or both of these flags must be present.

DMA_MEMORY_INCLUDES_CHILDREN - make the declared memory be allocated
by dma_alloc_coherent of any child devices of this one (for memory
residing on a bridge).

DMA_MEMORY_EXCLUSIVE - only allocate memory from the declared regions.
Do not allow dma_alloc_coherent() to fall back to system memory when
it's out of memory in the declared region.

The return value will be either DMA_MEMORY_MAP or DMA_MEMORY_IO and
must correspond to a passed in flag (i.e. no returning DMA_MEMORY_IO
if only DMA_MEMORY_MAP were passed in) for success, or zero for
failure.

Note, for DMA_MEMORY_IO returns, all subsequent memory returned by
dma_alloc_coherent() may no longer be accessed directly, but instead
must be accessed using the correct bus functions.  If your driver
isn't prepared to handle this contingency, it should not specify
DMA_MEMORY_IO in the input flags.

As a simplification for the platforms, only *one* such region of
memory may be declared per device.

For reasons of efficiency, most platforms choose to track the declared
region only at the granularity of a page.  For smaller allocations,
you should use the dma_pool() API.

void
dma_release_declared_memory(struct device *dev)

Remove the memory region previously declared from the system.  This
API performs *no* in-use checking for this region and will return
unconditionally having removed all the required structures.  It is the
driver's job to ensure that no parts of this memory region are
currently in use.

void *
dma_mark_declared_memory_occupied(struct device *dev,
				  dma_addr_t device_addr, size_t size)

This is used to occupy specific regions of the declared space
(dma_alloc_coherent() will hand out the first free region it finds).

device_addr is the *device* address of the region requested.

size is the size (and should be a page-sized multiple).

The return value will be either a pointer to the processor virtual
address of the memory, or an error (via PTR_ERR()) if any part of the
region is occupied.
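
Finally, here is a minimal sketch of declaring a region of device-local
memory; the addresses, size and flag choice are entirely hypothetical
and would depend on the actual hardware.

	#include <linux/dma-mapping.h>
	#include <linux/device.h>
	#include <linux/errno.h>

	/* hypothetical 64KB of on-board RAM, addresses invented */
	#define MY_MEM_BUS	0x80000000	/* address on the bus */
	#define MY_MEM_DEV	0x00000000	/* address the device uses */
	#define MY_MEM_SIZE	0x10000		/* multiple of PAGE_SIZE */

	static int my_declare_memory(struct device *dev)
	{
		int ret;

		ret = dma_declare_coherent_memory(dev, MY_MEM_BUS,
						  MY_MEM_DEV, MY_MEM_SIZE,
						  DMA_MEMORY_MAP |
						  DMA_MEMORY_EXCLUSIVE);
		if (!(ret & DMA_MEMORY_MAP))
			return -ENODEV;	/* declaration failed */

		/*
		 * dma_alloc_coherent() on this device now hands out
		 * directly writable memory from the declared region only.
		 */
		return 0;
	}

	static void my_release_memory(struct device *dev)
	{
		/* caller must ensure no part of the region is still in use */
		dma_release_declared_memory(dev);
	}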