HDDS-14629. Code cleanup after HDDS-13906 #9773

Russole wants to merge 4 commits into apache:master
Conversation
-   path = getDbStore().getDbLocation().toPath().resolve(dbFile.getFileName()).toAbsolutePath().toString();
- }
- omdbArchiver.getHardLinkFileMap().put(path, fileId);
+ String path = dbFile.toFile().getAbsolutePath();
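For context, a minimal sketch of the encapsulation this PR describes. The class and method names (OMDBArchiver, recordHardLinkMapping) come from the PR description; the field and method bodies here are illustrative assumptions, not the actual implementation:

import java.util.HashMap;
import java.util.Map;

public class OMDBArchiver {
  // Owned and initialized internally, so the map is never null and
  // setHardLinkFileMap() is no longer needed.
  private final Map<String, String> hardLinkFileMap = new HashMap<>();

  // Callers record entries through this method instead of mutating the
  // map directly via getHardLinkFileMap().put(...).
  public void recordHardLinkMapping(String path, String fileId) {
    hardLinkFileMap.put(path, fileId);
  }
}

Under those assumptions, a caller would write omdbArchiver.recordHardLinkMapping(path, fileId) in place of the removed put(...) call in the diff above.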
dbFile comes from either Files.list(...) or from a List<Path> created in extractSSTFilesFromCompactionLog().
From directory listing:

try (Stream<Path> files = Files.list(dbDir)) {
  collectFilesFromDir(..., files, ...);
}

From compaction log (in extractSSTFilesFromCompactionLog()):

sstFiles.add(sstBackupDir.resolve(f.getFileName() + ROCKSDB_SST_SUFFIX));

In both cases, dbFile is a valid Path (no null elements). Also, Path.resolve() never returns null; it returns a Path or throws NullPointerException if the argument is null.
So dbFile.toFile().getAbsolutePath() cannot be null in this flow.
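To make this concrete, here is a minimal, self-contained sketch (the class name, temp directory, and file name are illustrative, not from the PR) showing that Files.list(), Path.resolve(), and File.getAbsolutePath() never return null:

import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

public class ResolveNullability {
  public static void main(String[] args) throws Exception {
    // Hypothetical stand-in for the OM db directory.
    Path dbDir = Files.createTempDirectory("dbDir");
    Files.createFile(dbDir.resolve("000123.sst"));

    // Every Path produced by Files.list() is non-null.
    try (Stream<Path> files = Files.list(dbDir)) {
      files.forEach(dbFile -> {
        // File.getAbsolutePath() always returns a String, never null.
        String path = dbFile.toFile().getAbsolutePath();
        System.out.println(path);
      });
    }

    // resolve() returns a Path or throws NullPointerException on a null
    // argument; it has no code path that returns null.
    Path resolved = dbDir.resolve("000123.sst");
    System.out.println(resolved.toAbsolutePath());
  }
}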
Thanks @yandrey321 for the review.
Gargi-jais11 left a comment:
Thanks @Russole for the patch.
Please find the inline comments.
  Path data = Files.createTempFile(DATA_PREFIX, DATA_SUFFIX);
- Path metaDirPath = OMStorage.getOmDbDir(conf).toPath();
+ Path metaDirPath = OMStorage.getOmDbDir(conf).toPath()
+     .toAbsolutePath().normalize();
The temporary file data is never cleaned up. Consider wrapping the logic in a try block and deleting the temp file in a finally block. Without this fix, every checkpoint operation creates an orphaned temp file. Over time, this will consume disk space and could eventually cause the system to run out of space, leading to operation failures.
Path data = Files.createTempFile(DATA_PREFIX, DATA_SUFFIX);
try {
  Path metaDirPath = OMStorage.getOmDbDir(conf).toPath()
      .toAbsolutePath().normalize();
  StringBuilder sb = new StringBuilder();
  for (Map.Entry<String, String> entry : hardlinkFileMap.entrySet()) {
    Path p = new File(entry.getKey()).toPath();
    // Resolve relative entries against the metadata dir so that
    // relativize() below never mixes absolute and relative paths.
    if (!p.isAbsolute()) {
      p = metaDirPath.resolve(p);
    }
    p = p.toAbsolutePath().normalize();
    String fileId = entry.getValue();
    Path relativePath = metaDirPath.relativize(p);
    sb.append(relativePath).append('\t').append(fileId).append('\n');
  }
  Files.write(data, sb.toString().getBytes(StandardCharsets.UTF_8), StandardOpenOption.TRUNCATE_EXISTING);
  includeFile(data.toFile(), OmSnapshotManager.OM_HARDLINK_FILE, archiveOutputStream);
} finally {
  // Delete the temp file even if writing or archiving fails.
  Files.deleteIfExists(data);
}
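As a side note on why the normalization above matters: Path.relativize() throws IllegalArgumentException when one path is absolute and the other is relative. A small self-contained demo (the paths and class name are made up for illustration):

import java.nio.file.Path;
import java.nio.file.Paths;

public class RelativizeDemo {
  public static void main(String[] args) {
    Path metaDir = Paths.get("/var/lib/ozone/om").toAbsolutePath().normalize();
    Path entry = Paths.get("om.db/CURRENT"); // relative, as read from the map

    try {
      // Absolute base vs. relative other: throws IllegalArgumentException.
      metaDir.relativize(entry);
    } catch (IllegalArgumentException e) {
      System.out.println("mismatch: " + e);
    }

    // Resolving and normalizing first makes relativize() safe.
    Path p = entry.isAbsolute() ? entry : metaDir.resolve(entry);
    p = p.toAbsolutePath().normalize();
    System.out.println(metaDir.relativize(p)); // prints om.db/CURRENT
  }
}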
Thanks for the suggestion, @Gargi-jais11. The temp file is now deleted in a finally block.
@jojochuang Hi, this PR is currently under review.

@jojochuang Please review the PR.
What changes were proposed in this pull request?
- OMDBArchiver now owns and initializes the hardlink mapping internally, removing setHardLinkFileMap() and its nullable state.
- Used relativize() in writeHardlinkFile to prevent absolute/relative path mismatch exceptions.
- Callers record entries through recordHardLinkMapping() instead.

What is the link to the Apache JIRA
https://issues.apache.org/jira/browse/HDDS-14629
How was this patch tested?