@@ -237,7 +237,7 @@ static void logical_end_heap_rewrite(RewriteState state);
  * new_heap		new, locked heap relation to insert tuples to
  * oldest_xmin	xid used by the caller to determine which tuples are dead
  * freeze_xid	xid before which tuples will be frozen
- * min_multi	multixact before which multis will be removed
+ * cutoff_multi	multixact before which multis will be removed
  * use_wal		should the inserts to the new heap be WAL-logged?
  *
  * Returns an opaque RewriteState, allocated in current memory context,
@@ -787,7 +787,7 @@ raw_heap_insert(RewriteState state, HeapTuple tup)
  * Instead we simply write the mapping files out to disk, *before* the
  * XLogInsert() is performed. That guarantees that either the XLogInsert() is
  * inserted after the checkpoint's redo pointer or that the checkpoint (via
- * LogicalRewriteHeapCheckpoint()) has flushed the (partial) mapping file to
+ * CheckPointLogicalRewriteHeap()) has flushed the (partial) mapping file to
  * disk. That leaves the tail end that has not yet been flushed open to
  * corruption, which is solved by including the current offset in the
  * xl_heap_rewrite_mapping records and truncating the mapping file to it