largefiles: more cleanup, comment clarification/updates, and readability

Changeset dd4a727edd72

Parent dbe037f9ad75

by Andrew Pritchard <andrewp@fogcreek.com>

Changes to 2 files · Showing diff from parent dbe037f9ad75

Change 1 of 2: basestore.py
 
@@ -84,18 +84,16 @@
             tmpfile = os.fdopen(tmpfd, 'w')
 
             try:
-                bhash = self._getfile(tmpfile, filename, hash)
+                hhash = binascii.hexlify(self._getfile(tmpfile, filename, hash))
             except StoreError, err:
-                tmpfile.close()
                 ui.warn(err.longmessage())
-                os.remove(tmpfilename)
-                missing.append(filename)
-                continue
+                hhash = ""
 
-            hhash = binascii.hexlify(bhash)
             if hhash != hash:
-                ui.warn(_('%s: data corruption (expected %s, got %s)\n')
-                        % (filename, hash, hhash))
+                if hhash != "":
+                    ui.warn(_('%s: data corruption (expected %s, got %s)\n')
+                            % (filename, hash, hhash))
+                tmpfile.close() # no-op if it's already closed
                 os.remove(tmpfilename)
                 missing.append(filename)
                 continue
@@ -123,13 +121,8 @@
             cctx = self.repo[rev]
             cset = "%d:%s" % (cctx.rev(), node.short(cctx.node()))
 
-            for standin in cctx:
-                failed = (self._verifyfile(cctx,
-                                           cset,
-                                           contents,
-                                           standin,
-                                           verified)
-                          or failed)
+            failed = lfutil.any_(self._verifyfile(
+                cctx, cset, contents, standin, verified) for standin in cctx)
 
         num_revs = len(verified)
         num_lfiles = len(set([fname for (fname, fnode) in verified]))
Change 1 of 1: lfutil.py
 
@@ -175,33 +175,26 @@
     else:
         lfdirstate = largefiles_dirstate(opener, ui, repo.root)
 
-    # If the lfiles dirstate does not exist, populate and create it. This
+    # If the largefiles dirstate does not exist, populate and create it. This
     # ensures that we create it on the first meaningful largefiles operation in
     # a new clone. It also gives us an easy way to forcibly rebuild largefiles
     # state:
-    #   rm .hg/largefiles/dirstate && hg lfstatus
+    #   rm .hg/largefiles/dirstate && hg status
     # Or even, if things are really messed up:
-    #   rm -rf .hg/largefiles && hg lfstatus
-    # (although that can lose data, e.g. pending big file revisions in
-    # .hg/largefiles/{pending,committed}).
+    #   rm -rf .hg/largefiles && hg status
     if not os.path.exists(os.path.join(admin, 'dirstate')):
         util.makedirs(admin)
         matcher = getstandinmatcher(repo)
         for standin in dirstate_walk(repo.dirstate, matcher):
-            bigfile = splitstandin(standin)
-            hash = readstandin(repo, bigfile)
+            lfile = splitstandin(standin)
+            hash = readstandin(repo, lfile)
+            lfdirstate.normallookup(lfile)
             try:
-                curhash = hashfile(bigfile)
+                if hash == hashfile(lfile):
+                    lfdirstate.normal(lfile)
             except IOError, err:
-                if err.errno == errno.ENOENT:
-                    lfdirstate.normallookup(bigfile)
-                else:
+                if err.errno != errno.ENOENT:
                     raise
-            else:
-                if curhash == hash:
-                    lfdirstate.normal(bigfile)
-                else:
-                    lfdirstate.normallookup(bigfile)
 
 
     lfdirstate.write()
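The rewritten loop reduces to a simple pattern: mark every largefile as "needs a later lookup" up front, then promote it to "known clean" only when the working-copy file exists and hashes to the expected value; a missing file (ENOENT) is silently left in the lookup state. A standalone sketch of that pattern, using a plain dict in place of the real lfdirstate (`rebuild_state` is a hypothetical name, and `hashfile` here only mirrors lfutil's SHA-1 helper):

```python
import errno
import hashlib
import os
import tempfile

def hashfile(path):
    """SHA-1 of a file's contents as hex, mirroring lfutil.hashfile."""
    h = hashlib.sha1()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(4096), b''):
            h.update(chunk)
    return h.hexdigest()

def rebuild_state(expected, state):
    """expected: {path: expected sha1}; state: a dict standing in for lfdirstate.

    Every file starts as 'lookup' (recheck on the next status); it is
    promoted to 'normal' (known clean) only if it exists and matches.
    """
    for path, wanted in expected.items():
        state[path] = 'lookup'              # lfdirstate.normallookup(lfile)
        try:
            if wanted == hashfile(path):
                state[path] = 'normal'      # lfdirstate.normal(lfile)
        except IOError as err:
            if err.errno != errno.ENOENT:   # a missing file is expected...
                raise                       # ...any other I/O error is not

# Demonstration: one present file with a matching hash, one missing file.
with tempfile.TemporaryDirectory() as d:
    present = os.path.join(d, 'big.bin')
    with open(present, 'wb') as f:
        f.write(b'payload')
    missing = os.path.join(d, 'gone.bin')

    state = {}
    rebuild_state({present: hashfile(present), missing: '0' * 40}, state)
    print(state[present])   # normal
    print(state[missing])   # lookup
```

Starting from `normallookup` and only upgrading on a confirmed match means the worst case (a wrong guess) is an extra recheck on the next `hg status`, never a silently stale "clean" entry, which is why the refactored version can drop the old nested else branches.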
  else:   lfdirstate = largefiles_dirstate(opener, ui, repo.root)   - # If the lfiles dirstate does not exist, populate and create it. This + # If the largefiles dirstate does not exist, populate and create it. This   # ensures that we create it on the first meaningful largefiles operation in   # a new clone. It also gives us an easy way to forcibly rebuild largefiles   # state: - # rm .hg/largefiles/dirstate && hg lfstatus + # rm .hg/largefiles/dirstate && hg status   # Or even, if things are really messed up: - # rm -rf .hg/largefiles && hg lfstatus - # (although that can lose data, e.g. pending big file revisions in - # .hg/largefiles/{pending,committed}). + # rm -rf .hg/largefiles && hg status   if not os.path.exists(os.path.join(admin, 'dirstate')):   util.makedirs(admin)   matcher = getstandinmatcher(repo)   for standin in dirstate_walk(repo.dirstate, matcher): - bigfile = splitstandin(standin) - hash = readstandin(repo, bigfile) + lfile = splitstandin(standin) + hash = readstandin(repo, lfile) + lfdirstate.normallookup(lfile)   try: - curhash = hashfile(bigfile) + if hash == hashfile(lfile): + lfdirstate.normal(lfile)   except IOError, err: - if err.errno == errno.ENOENT: - lfdirstate.normallookup(bigfile) - else: + if err.errno != errno.ENOENT:   raise - else: - if curhash == hash: - lfdirstate.normal(bigfile) - else: - lfdirstate.normallookup(bigfile)     lfdirstate.write()