Kiln » Kiln Storage Service

tip update to Kiln Storage Service 2.5.139

Changeset c5ec70d1ed42

Parent fca1d86acdc9

by Benjamin Pollack <benjamin@fogcreek.com>

Changes to 42 files · Showing diff from parent fca1d86acdc9

.hgignore

@@ -0,0 +1,18 @@
+syntax: glob
+*.pyc
+*.pyo
+*.swp
+*.db
+*.sqlite3
+*.orig
+kiln/dist/*
+TAGS
+\#*\#
+local_settings.py
+kiln/build/*
+installer/Output
+out.txt
+*~
+_ReSharper.*
+obj
+bin
README

@@ -1,6 +1,6 @@
 OVERVIEW
 
-This is a stand-alone server for Mercurial repositories, that provides
+This is a stand-alone server for Mercurial repositories that provides
 Mercurial data in the form of JSON requests. This allows for much
 more efficient polling of repository data from long-running
 applications, such as websites, IDEs, and so on.
build.ps1

@@ -1,5 +1,13 @@
 param([string] $repopath = "..")
 
+function Get-Batchfile ($file) {
+    $cmd = "`"$file`" & set"
+    cmd /c $cmd | Foreach-Object {
+        $p, $v = $_.split('=')
+        Set-Item -path env:$p -value $v
+    }
+}
+
 function Get-ScriptDirectory
 {
     $Invocation = (Get-Variable MyInvocation -Scope 1).Value
@@ -10,9 +18,18 @@
 
 pushd $path
 pushd kiln
+if (test-path 'c:\pythonve\kiln25')
+{
+    Get-Batchfile('c:\pythonve\kiln25\scripts\activate.bat')
+}
 python setup.py py2exe
+if (test-path 'c:\pythonve\kiln25')
+{
+    Get-Batchfile('c:\pythonve\kiln25\scripts\deactivate.bat')
+}
 hg -R $repopath archive -t zip dist\source.zip
 popd
+c:\Windows\Microsoft.NET\Framework\v3.5\msbuild.exe /p:Configuration=Release installer\RepoDirectoryMigrator\RepoDirectoryMigrator.sln
 $iscc = "C:\Program Files (x86)\Inno Setup 5\ISCC.exe"
 if (-not (Test-Path $iscc))
 {
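The Get-Batchfile helper above captures a batch file's environment changes by running the batch file and `set` in a single cmd.exe process, then parsing the resulting KEY=VALUE dump back into the caller's environment. A minimal sketch of that parsing step in Python (the sample output below is illustrative, not taken from a real activate.bat, and unlike the PowerShell `split('=')` it keeps the full value when a value itself contains `=`):

```python
def parse_set_output(lines):
    """Parse `cmd /c "file.bat & set"` output into a dict of env vars."""
    env = {}
    for line in lines:
        name, sep, value = line.rstrip('\n').partition('=')
        if sep:  # skip any non KEY=VALUE noise printed by the batch file itself
            env[name] = value
    return env

sample = [
    'Activating virtualenv...',  # batch file's own output: no '=', skipped
    'PATH=c:\\pythonve\\kiln25\\Scripts;c:\\Windows',
    'VIRTUAL_ENV=c:\\pythonve\\kiln25',
]
env = parse_set_output(sample)
print(env['VIRTUAL_ENV'])
```

The trick works because `cmd /c "file.bat & set"` runs both commands in one child process, so `set` prints the environment as modified by the batch file.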
@@ -1,5 +1,5 @@
 #define MyAppName "Kiln Storage Service"
-#define MyAppVerName "Kiln Storage Service 1.0"
+#define MyAppVerName "Kiln Storage Service 2.5"
 #define MyAppPublisher "Fog Creek Software"
 #define MyAppURL "http://www.fogcreek.com/kiln/"
 
@@ -33,16 +33,21 @@
 WelcomeLabel2=This will install [name/ver] on your computer.
 
 [Files]
+
+Source: RepoDirectoryMigrator\RepoDirectoryMigrator\bin\x86\Release\RepoDirectoryMigrator.exe; DestDir: {tmp}; Flags: ignoreversion
+Source: ctags.exe; DestDir: {app}; Flags: ignoreversion
 Source: ..\kiln\dist\library.zip; DestDir: {app}; Flags: ignoreversion
-Source: ..\kiln\dist\w9xpopen.exe; DestDir: {app}; Flags: ignoreversion
 Source: ..\kiln\dist\backend.exe; DestDir: {app}; Flags: ignoreversion
+Source: ..\kiln\redis-server.exe; DestDir: {app}; Flags: ignoreversion
 Source: ..\kiln\dist\source.zip; DestDir: {app}; Flags: ignoreversion
+Source: ..\kiln\dist\opengrok.jar; DestDir: {app}\opengrok; Flags: ignoreversion
+Source: ..\kiln\dist\lib\*; DestDir: {app}\opengrok\lib; Flags: recursesubdirs replacesameversion; Excludes: .hg*,*~
+Source: ..\kiln\client.crt; DestDir: {app}; Flags: ignoreversion
+Source: ..\kiln\client.key; DestDir: {app}; Flags: ignoreversion
 
 [Icons]
 Name: {group}\{cm:UninstallProgram,{#MyAppName}}; Filename: {uninstallexe}
 
-[Run]
-Filename: {app}\backend.exe; Parameters: --startup auto install; StatusMsg: Registering Kiln Storage Service; Flags: runhidden
 [UninstallRun]
 Filename: {app}\backend.exe; Parameters: stop; StatusMsg: Stopping Kiln Storage Service; Flags: runhidden
 Filename: {app}\backend.exe; Parameters: remove; StatusMsg: Removing Kiln Storage Service; Flags: runhidden
@@ -56,12 +61,40 @@
   StorageLocation: String;
   Port: Cardinal;
 
+  JavaVersion: String;
+
+  StoppedOldService: Boolean;
+
 const
   REG_KEY = 'Software\Fog Creek Software\Kiln';
+  OG_KEY = 'Software\Fog Creek Software\Kiln\OpenGrok';
+  DAEMON_KEY = 'Software\Fog Creek Software\Kiln\Daemon';
+  JAR = 'Jar';
   BACKEND_IP = 'KilnBackendIP';
   BACKEND_PORT = 'KilnBackendPort';
   REPOSITORY_ROOT = 'KilnRepositoryRoot';
   DELIBERATELY_PUBLIC = 'KilnDeliberatelyPublic';
+  MINIREDIS_DB = 'MiniredisDB';
+  DATA_DIR = 'DataDir';
+
+  INDEX_THREADS = 'IndexThreads';
+  QUEUE_THREADS = 'QueueThreads';
+  NINDEX_THREADS = 1;
+  NQUEUE_THREADS = 1;
+
+  DAEMON_HOST = 'host';
+  DAEMON_PORT = 'port';
+  DAEMON_DB = 'db';
+  DAEMON_SSL_KEY = 'ssl_key';
+  DAEMON_SSL_CERT = 'ssl_cert';
+
+  JAVA_KEY = 'Software\JavaSoft\Java Runtime Environment';
+  JAVA_VERSION = 'CurrentVersion';
+
+  JAVA = 'Java';
+  CONFIG_UPDATE = 'ConfigUpdate';
+  JAVA_HOME = 'JavaHome';
+  CTAGS = 'CTags';
 
 procedure InitializeWizard;
 var
@@ -72,6 +105,7 @@
   nextPageParent: Integer;
   param: String;
 begin
+  StoppedOldService := False;
   LocalOnly := False;
   for idx := 0 to ParamCount do
   begin
@@ -133,8 +167,12 @@
 procedure FinishInstall;
 var
   ip: String;
+  ogStorageLocation: String;
+  MiniredisDBLocation: String;
   ResultCode: Integer;
   deliberatelyPublic: Cardinal;
+  JavaLoc: String;
+  ret: Boolean;
 begin
   if (CompareStr(StorageLocation, '') = 0) then StorageLocation := StorageLocationPage.Values[0];
   if Port = 0 then Port := StrToInt(PortNumberPage.Values[0]);
@@ -149,25 +187,70 @@
   deliberatelyPublic := 1;
   end;
 
+  ogStorageLocation := StorageLocation + '\opengrokdata';
+  MiniredisDBLocation := StorageLocation + '\miniredis.db';
   if not DirExists(StorageLocation) then CreateDir(StorageLocation);
+  if not DirExists(ogStorageLocation) then CreateDir(ogStorageLocation);
+
+  if IsWin64 then ret := RegQueryStringValue(HKLM64, JAVA_KEY + '\' + JavaVersion, JAVA_HOME, JavaLoc)
+  else ret := RegQueryStringValue(HKEY_LOCAL_MACHINE, JAVA_KEY + '\' + JavaVersion, JAVA_HOME, JavaLoc);
+
+  JavaLoc := JavaLoc + '\bin\java.exe'
 
   RegWriteStringValue(HKEY_LOCAL_MACHINE, REG_KEY, REPOSITORY_ROOT, StorageLocation);
   RegWriteDWordValue(HKEY_LOCAL_MACHINE, REG_KEY, BACKEND_PORT, Port);
   RegWriteStringValue(HKEY_LOCAL_MACHINE, REG_KEY, BACKEND_IP, ip);
   RegWriteDWordValue(HKEY_LOCAL_MACHINE, REG_KEY, DELIBERATELY_PUBLIC, deliberatelyPublic);
+  RegWriteStringValue(HKEY_LOCAL_MACHINE, REG_KEY, MINIREDIS_DB, MiniredisDBLocation);
 
+  RegWriteStringValue(HKEY_LOCAL_MACHINE, OG_KEY, JAR, ExpandConstant('{app}\opengrok\opengrok.jar'));
+  RegWriteStringValue(HKEY_LOCAL_MACHINE, OG_KEY, DATA_DIR, ogStorageLocation);
+  RegWriteStringValue(HKEY_LOCAL_MACHINE, OG_KEY, CONFIG_UPDATE, 'localhost:2424');
+  RegWriteStringValue(HKEY_LOCAL_MACHINE, OG_KEY, JAVA, JavaLoc);
+  RegWriteStringValue(HKEY_LOCAL_MACHINE, OG_KEY, CTAGS, ExpandConstant('{app}\ctags.exe'));
+
+  RegWriteStringValue(HKEY_LOCAL_MACHINE, DAEMON_KEY, DAEMON_HOST, 'localhost');
+  RegWriteDWordValue(HKEY_LOCAL_MACHINE, DAEMON_KEY, DAEMON_PORT, Port + 1);
+  RegWriteDWordValue(HKEY_LOCAL_MACHINE, DAEMON_KEY, DAEMON_DB, 0);
+  RegWriteDWordValue(HKEY_LOCAL_MACHINE, DAEMON_KEY, INDEX_THREADS, NINDEX_THREADS);
+  RegWriteDWordValue(HKEY_LOCAL_MACHINE, DAEMON_KEY, QUEUE_THREADS, NQUEUE_THREADS);
+  RegWriteStringValue(HKEY_LOCAL_MACHINE, DAEMON_KEY, DAEMON_SSL_KEY, ExpandConstant('{app}\client.key'));
+  RegWriteStringValue(HKEY_LOCAL_MACHINE, DAEMON_KEY, DAEMON_SSL_CERT, ExpandConstant('{app}\client.crt'));
+
+  if Exec(ExpandConstant('{tmp}\RepoDirectoryMigrator.exe'), '', '', SW_HIDE, ewWaitUntilTerminated, ResultCode) then begin
+    if ResultCode <> 0 then RaiseException('Failed to migrate repositories to new directory structure!');
+  end;
+
+  if Exec(ExpandConstant('{app}\backend.exe'), '--startup auto install', '', SW_HIDE, ewWaitUntilTerminated, ResultCode) then begin
+    if ResultCode <> 0 then RaiseException('Failed to install service!');
+  end;
   if Exec(ExpandConstant('{app}\backend.exe'), 'start', '', SW_HIDE, ewWaitUntilTerminated, ResultCode) then begin
     if ResultCode <> 0 then RaiseException('Failed to start service!');
   end;
 end;
 
+procedure DeinitializeSetup();
+var
+  BackendPath: String;
+  ResultCode: Integer;
+begin
+  if StoppedOldService then begin
+    BackendPath := ExpandConstant('{app}\backend.exe');
+    Exec(BackendPath, 'start', '', SW_HIDE, ewNoWait, ResultCode);
+  end;
+end;
+
 procedure HaltBackend;
 var
   BackendPath: String;
   ResultCode: Integer;
 begin
   BackendPath := ExpandConstant('{app}\backend.exe');
-  if FileExists(BackendPath) then Exec(BackendPath, 'stop', '', SW_HIDE, ewWaitUntilTerminated, ResultCode);
+  if FileExists(BackendPath) then begin
+    StoppedOldService := True;
+    Exec(BackendPath, 'stop', '', SW_HIDE, ewWaitUntilTerminated, ResultCode);
+  end;
+  Sleep(3000)
 end;
 
 procedure CurStepChanged(CurStep: TSetupStep);
@@ -183,5 +266,40 @@
   RegDeleteValue(HKEY_LOCAL_MACHINE, REG_KEY, BACKEND_PORT);
   RegDeleteValue(HKEY_LOCAL_MACHINE, REG_KEY, BACKEND_IP);
   RegDeleteValue(HKEY_LOCAL_MACHINE, REG_KEY, DELIBERATELY_PUBLIC);
+
+  RegDeleteValue(HKEY_LOCAL_MACHINE, OG_KEY, JAR);
+  RegDeleteValue(HKEY_LOCAL_MACHINE, OG_KEY, DATA_DIR);
+  RegDeleteValue(HKEY_LOCAL_MACHINE, OG_KEY, CONFIG_UPDATE);
+  RegDeleteValue(HKEY_LOCAL_MACHINE, OG_KEY, JAVA);
+  RegDeleteValue(HKEY_LOCAL_MACHINE, OG_KEY, CTAGS);
+
+  RegDeleteValue(HKEY_LOCAL_MACHINE, DAEMON_KEY, DAEMON_HOST);
+  RegDeleteValue(HKEY_LOCAL_MACHINE, DAEMON_KEY, DAEMON_PORT);
+  RegDeleteValue(HKEY_LOCAL_MACHINE, DAEMON_KEY, DAEMON_DB);
+  RegDeleteValue(HKEY_LOCAL_MACHINE, DAEMON_KEY, INDEX_THREADS);
+  RegDeleteValue(HKEY_LOCAL_MACHINE, DAEMON_KEY, QUEUE_THREADS);
+  RegDeleteValue(HKEY_LOCAL_MACHINE, DAEMON_KEY, DAEMON_SSL_KEY);
+  RegDeleteValue(HKEY_LOCAL_MACHINE, DAEMON_KEY, DAEMON_SSL_CERT);
 end;
 end;
+
+function NextButtonClick(CurPageID: Integer) : Boolean;
+var
+  version: String;
+  ret: Boolean;
+begin
+  if CurPageID = wpWelcome then
+  begin
+    if IsWin64 then ret := RegQueryStringValue(HKLM64, JAVA_KEY, JAVA_VERSION, version)
+    else ret := RegQueryStringValue(HKEY_LOCAL_MACHINE, JAVA_KEY, JAVA_VERSION, version);
+
+    if ret then JavaVersion := version
+    else
+    begin
+      MsgBox('The Kiln Storage Service requires the Java Runtime Environment (JRE) be installed. Please install the JRE for your platform from the Oracle website.',
+        mbInformation, MB_OK);
+      Abort();
+    end;
+  end;
+  Result := True
+end;
@@ -0,0 +1,48 @@
+<Configuration>
+  <SettingsComponent>
+    <string />
+    <integer />
+    <boolean>
+      <setting name="SolutionAnalysisEnabled">False</setting>
+    </boolean>
+  </SettingsComponent>
+  <RecentFiles>
+    <RecentFiles>
+      <File id="AFC5BBEB-4CA4-4AEA-8449-95B66478AC29/f:Program.cs" caret="398" fromTop="14" />
+    </RecentFiles>
+    <RecentEdits>
+      <File id="AFC5BBEB-4CA4-4AEA-8449-95B66478AC29/f:Program.cs" caret="92" fromTop="3" />
+      <File id="AFC5BBEB-4CA4-4AEA-8449-95B66478AC29/f:Program.cs" caret="228" fromTop="9" />
+      <File id="AFC5BBEB-4CA4-4AEA-8449-95B66478AC29/f:Program.cs" caret="366" fromTop="14" />
+    </RecentEdits>
+  </RecentFiles>
+  <NAntValidationSettings>
+    <NAntPath value="" />
+  </NAntValidationSettings>
+  <UnitTestRunner>
+    <Providers />
+  </UnitTestRunner>
+  <UnitTestRunnerNUnit>
+    <NUnitInstallDir IsNull="False">
+    </NUnitInstallDir>
+    <UseAddins>Never</UseAddins>
+  </UnitTestRunnerNUnit>
+  <CompletionStatisticsManager>
+    <ItemStatistics item="Default">
+      <Item value="using" priority="0" />
+      <Item value="Microsoft" priority="0" />
+      <Item value="Win32" priority="0" />
+      <Item value="var" priority="2" />
+      <Item value="Registry`0" priority="0" />
+      <Item value="rk" priority="0" />
+      <Item value="const" priority="1" />
+      <Item value="string" priority="0" />
+      <Item value="RegistryKey`0" priority="0" />
+      <Item value="Environment`0" priority="0" />
+      <Item value="root" priority="0" />
+    </ItemStatistics>
+    <ItemStatistics item="Qualified:Microsoft.Win32.RegistryKey">
+      <Item value="GetValue`0" priority="1" />
+    </ItemStatistics>
+  </CompletionStatisticsManager>
+</Configuration>
\ No newline at end of file
@@ -0,0 +1,20 @@
+
+Microsoft Visual Studio Solution File, Format Version 10.00
+# Visual Studio 2008
+Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "RepoDirectoryMigrator", "RepoDirectoryMigrator\RepoDirectoryMigrator.csproj", "{AFC5BBEB-4CA4-4AEA-8449-95B66478AC29}"
+EndProject
+Global
+	GlobalSection(SolutionConfigurationPlatforms) = preSolution
+		Debug|x86 = Debug|x86
+		Release|x86 = Release|x86
+	EndGlobalSection
+	GlobalSection(ProjectConfigurationPlatforms) = postSolution
+		{AFC5BBEB-4CA4-4AEA-8449-95B66478AC29}.Debug|x86.ActiveCfg = Debug|x86
+		{AFC5BBEB-4CA4-4AEA-8449-95B66478AC29}.Debug|x86.Build.0 = Debug|x86
+		{AFC5BBEB-4CA4-4AEA-8449-95B66478AC29}.Release|x86.ActiveCfg = Release|x86
+		{AFC5BBEB-4CA4-4AEA-8449-95B66478AC29}.Release|x86.Build.0 = Release|x86
+	EndGlobalSection
+	GlobalSection(SolutionProperties) = preSolution
+		HideSolutionNode = FALSE
+	EndGlobalSection
+EndGlobal
@@ -0,0 +1,30 @@
+using System;
+using System.IO;
+using Microsoft.Win32;
+
+namespace RepoDirectoryMigrator
+{
+    class Program
+    {
+        static void Main(string[] args)
+        {
+            var repoRoot = (string)Registry.GetValue(@"HKEY_LOCAL_MACHINE\SOFTWARE\Fog Creek Software\Kiln", "KilnRepositoryRoot", null);
+            if (string.IsNullOrEmpty(repoRoot))
+            {
+                Console.Error.WriteLine("KEY NOT FOUND!");
+                Environment.Exit(1);
+            }
+            var repositories = Directory.GetDirectories(repoRoot, "????????-????-????-????-????????????");
+            foreach (var path in repositories)
+            {
+                var repo = Path.GetFileName(path);
+                var part1 = Path.Combine(repoRoot, repo.Substring(0, 2));
+                var part2 = Path.Combine(part1, repo.Substring(2, 2));
+                Directory.CreateDirectory(part1);
+                Directory.CreateDirectory(part2);
+                Directory.Move(path, Path.Combine(part2, repo));
+            }
+            Console.Error.WriteLine("SUCCESS!");
+        }
+    }
+}
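The migrator above shards GUID-named repository directories two levels deep by their first four hex digits (`abcdef12-…` ends up under `ab/cd/`), keeping any single directory from accumulating thousands of entries. The same path mapping can be sketched in Python (the example root path is illustrative):

```python
import os.path

def sharded_path(repo_root, repo):
    """Return the two-level destination path for a GUID-named repo directory."""
    part1 = os.path.join(repo_root, repo[0:2])   # first two hex digits
    part2 = os.path.join(part1, repo[2:4])       # next two hex digits
    return os.path.join(part2, repo)

print(sharded_path('/repos', 'afc5bbeb-4ca4-4aea-8449-95b66478ac29'))
# -> /repos/af/c5/afc5bbeb-4ca4-4aea-8449-95b66478ac29
```

Because GUIDs are uniformly distributed, a two-digit prefix gives 256 buckets per level, so the fan-out stays small even for very large installations.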
@@ -0,0 +1,36 @@
+using System.Reflection;
+using System.Runtime.CompilerServices;
+using System.Runtime.InteropServices;
+
+// General Information about an assembly is controlled through the following
+// set of attributes. Change these attribute values to modify the information
+// associated with an assembly.
+[assembly: AssemblyTitle("RepoDirectoryMigrator")]
+[assembly: AssemblyDescription("")]
+[assembly: AssemblyConfiguration("")]
+[assembly: AssemblyCompany("Microsoft")]
+[assembly: AssemblyProduct("RepoDirectoryMigrator")]
+[assembly: AssemblyCopyright("Copyright © Microsoft 2011")]
+[assembly: AssemblyTrademark("")]
+[assembly: AssemblyCulture("")]
+
+// Setting ComVisible to false makes the types in this assembly not visible
+// to COM components. If you need to access a type in this assembly from
+// COM, set the ComVisible attribute to true on that type.
+[assembly: ComVisible(false)]
+
+// The following GUID is for the ID of the typelib if this project is exposed to COM
+[assembly: Guid("47cb10cb-cc59-438e-b866-e7b6eebcbab0")]
+
+// Version information for an assembly consists of the following four values:
+//
+//      Major Version
+//      Minor Version
+//      Build Number
+//      Revision
+//
+// You can specify all the values or you can default the Build and Revision Numbers
+// by using the '*' as shown below:
+// [assembly: AssemblyVersion("1.0.*")]
+[assembly: AssemblyVersion("1.0.0.0")]
+[assembly: AssemblyFileVersion("1.0.0.0")]
@@ -0,0 +1,66 @@
+<?xml version="1.0" encoding="utf-8"?>
+<Project ToolsVersion="3.5" DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
+  <PropertyGroup>
+    <Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration>
+    <Platform Condition=" '$(Platform)' == '' ">AnyCPU</Platform>
+    <ProductVersion>9.0.30729</ProductVersion>
+    <SchemaVersion>2.0</SchemaVersion>
+    <ProjectGuid>{AFC5BBEB-4CA4-4AEA-8449-95B66478AC29}</ProjectGuid>
+    <OutputType>Exe</OutputType>
+    <AppDesignerFolder>Properties</AppDesignerFolder>
+    <RootNamespace>RepoDirectoryMigrator</RootNamespace>
+    <AssemblyName>RepoDirectoryMigrator</AssemblyName>
+    <TargetFrameworkVersion>v2.0</TargetFrameworkVersion>
+    <FileAlignment>512</FileAlignment>
+  </PropertyGroup>
+  <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' ">
+    <DebugSymbols>true</DebugSymbols>
+    <DebugType>full</DebugType>
+    <Optimize>false</Optimize>
+    <OutputPath>bin\Debug\</OutputPath>
+    <DefineConstants>DEBUG;TRACE</DefineConstants>
+    <ErrorReport>prompt</ErrorReport>
+    <WarningLevel>4</WarningLevel>
+  </PropertyGroup>
+  <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|AnyCPU' ">
+    <DebugType>pdbonly</DebugType>
+    <Optimize>true</Optimize>
+    <OutputPath>bin\Release\</OutputPath>
+    <DefineConstants>TRACE</DefineConstants>
+    <ErrorReport>prompt</ErrorReport>
+    <WarningLevel>4</WarningLevel>
+  </PropertyGroup>
+  <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|x86' ">
+    <DebugSymbols>true</DebugSymbols>
+    <OutputPath>bin\x86\Debug\</OutputPath>
+    <DefineConstants>DEBUG;TRACE</DefineConstants>
+    <DebugType>full</DebugType>
+    <PlatformTarget>x86</PlatformTarget>
+    <ErrorReport>prompt</ErrorReport>
+  </PropertyGroup>
+  <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|x86' ">
+    <OutputPath>bin\x86\Release\</OutputPath>
+    <DefineConstants>TRACE</DefineConstants>
+    <Optimize>true</Optimize>
+    <DebugType>pdbonly</DebugType>
+    <PlatformTarget>x86</PlatformTarget>
+    <ErrorReport>prompt</ErrorReport>
+  </PropertyGroup>
+  <ItemGroup>
+    <Reference Include="System" />
+    <Reference Include="System.Data" />
+    <Reference Include="System.Xml" />
+  </ItemGroup>
+  <ItemGroup>
+    <Compile Include="Program.cs" />
+    <Compile Include="Properties\AssemblyInfo.cs" />
+  </ItemGroup>
+  <Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" />
+  <!-- To modify your build process, add your task inside one of the targets below and uncomment it.
+       Other similar extension points exist, see Microsoft.Common.targets.
+  <Target Name="BeforeBuild">
+  </Target>
+  <Target Name="AfterBuild">
+  </Target>
+  -->
+</Project>
\ No newline at end of file
kiln.wsgi

@@ -1,43 +1,20 @@
 #!/usr/bin/env python
+import site
+site.addsitedir('/home/kiln/virtualenv/kiln25/lib/python2.6/site-packages')
+
 import os
 import sys
-import urllib
-import urllib2
-
-from django.core.handlers.wsgi import WSGIHandler
 
 OUR_ROOT = os.path.abspath(os.path.dirname(__file__))
 os.environ['HGENCODING'] = 'utf8'
+os.environ['TEMP'] = '/home/kiln/data/tmp'
 paths = (OUR_ROOT, os.path.join(OUR_ROOT, 'kiln'))
 for path in paths:
     if path not in sys.path:
         sys.path.append(path)
-
 os.environ['DJANGO_SETTINGS_MODULE'] = 'kiln.settings'
 
-class KilnWSGIHandler(WSGIHandler):
-    def report_exception(self, e):
-        def get_stack_trace():
-            import traceback
-            return '\n'.join(traceback.format_exception(*sys.exc_info()))
+from kiln.api import handlers
+from kiln.versionmiddleware import VersionMiddleware
+from kiln.errorloggingmiddleware import ErrorLoggingMiddleware
 
-        bug = {'ScoutUserName': 'BugzScout',
-               'ScoutProject': 'Kiln',
-               'ScoutArea': 'Backend',
-               'Description': str(e),
-               'Extra': get_stack_trace()}
-
-        try:
-            urllib2.urlopen('http://our.fogbugz.com/scoutSubmit.asp', urllib.urlencode(bug))
-        except:
-            pass
-
-    def __call__(self, environ, start_response):
-        if 'kiln.tempdir' in environ:
-            os.environ['TMPDIR'] = environ['kiln.tempdir']
-        try:
-            return super(KilnWSGIHandler, self).__call__(environ, start_response)
-        except Exception, e:
-            self.report_exception(e)
-            raise
-
-application = KilnWSGIHandler()
+application = ErrorLoggingMiddleware(VersionMiddleware(handlers.app))
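The new kiln.wsgi wires the application through two WSGI middlewares instead of a Django handler subclass. A WSGI middleware is just a callable that wraps another WSGI app; the stand-ins below are hypothetical (the real ones live in `kiln.versionmiddleware` and `kiln.errorloggingmiddleware`) and only sketch the wrapping shape:

```python
def app(environ, start_response):
    """A trivial inner WSGI application."""
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return ['hello']

class HeaderMiddleware(object):
    """Append a header to every response, the way a VersionMiddleware might."""
    def __init__(self, app, name, value):
        self.app, self.name, self.value = app, name, value

    def __call__(self, environ, start_response):
        def wrapped_start(status, headers, exc_info=None):
            return start_response(status, headers + [(self.name, self.value)], exc_info)
        return self.app(environ, wrapped_start)

# Middlewares compose by nesting, exactly as in the new kiln.wsgi's last line.
application = HeaderMiddleware(app, 'X-Kiln-Version', '2.5.139')

# Drive it with a fake server to observe the injected header.
captured = {}
def fake_start(status, headers, exc_info=None):
    captured['status'], captured['headers'] = status, headers

body = application({}, fake_start)
print(captured['headers'])
```

Each layer sees every request and response, so cross-cutting concerns like error reporting or version stamping stay out of the application itself.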
kiln/__init__.py

@@ -0,0 +1,12 @@
+# legacy imports
+from redis.client import Redis, ConnectionPool
+from redis.exceptions import RedisError, ConnectionError, AuthenticationError
+from redis.exceptions import ResponseError, InvalidResponse, InvalidData
+
+__version__ = '2.0.0'
+
+__all__ = [
+    'Redis', 'ConnectionPool',
+    'RedisError', 'ConnectionError', 'ResponseError', 'AuthenticationError'
+    'InvalidResponse', 'InvalidData',
+    ]
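One subtlety worth noticing in the `__all__` list above as committed: there is no comma after `'AuthenticationError'`, so Python's implicit string-literal concatenation silently fuses it with `'InvalidResponse'` into a single bogus export name. A quick demonstration with the same literals:

```python
# The list exactly as written in the diff: note the missing comma after
# 'AuthenticationError', which concatenates it with the next literal.
exports = [
    'Redis', 'ConnectionPool',
    'RedisError', 'ConnectionError', 'ResponseError', 'AuthenticationError'
    'InvalidResponse', 'InvalidData',
]
print(len(exports))   # 7, not the intended 8
print(exports[5])     # 'AuthenticationErrorInvalidResponse'
```

The practical effect is that `from module import *` would neither export `AuthenticationError` nor `InvalidResponse`, and would raise for the fused name.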
kiln/api/__init__.py

@@ -1,1 +1,2 @@
-
+import handlers
+import queuestats
@@ -46,11 +46,15 @@
 import string
 import os
 import shutil
-from mercurial import commands, extensions, util, bdiff
+from mercurial import bdiff, commands, extensions, store, util
 from mercurial.context import filectx
 from mercurial.node import nullrev
 from mercurial.i18n import _
-from mercurial.store import hybridencode
+
+CACHEPATH = 'annotations/'
+
+def hybridencode(f):
+    return store._hybridencode(f, lambda path: store._auxencode(path, True))
 
 class annotationcache(object):
     ''' Provides access to the cache of file annotations.
@@ -61,7 +65,7 @@
     access. A cache file is line-oriented where each line is an
     n-tuple of strings separated by the separator character ':'.
 
-    If the file has any ancestor with a different name, then we 
+    If the file has any ancestor with a different name, then we
     append .f or .n depending on whether or not we followed the
     annotation history to these ancestors. Otherwise a generic
     cache is created which works for either case.
@@ -70,8 +74,8 @@
     def __init__(self, repo, follow = True):
         ''' Create a new annotations cache for the given repository '''
         self.followflag = follow and 'f' or 'n'
+        self._opener = repo.opener
         self.cachepath = repo.join("annotations")
-        self.opener = util.opener(self.cachepath)
         self.sepchar = ':'
 
         # fdcache caches information about existing files:
@@ -81,6 +85,9 @@
         # fdcache[path] does not exist if the file state is unknown
         self.fdcache = {}
 
+    def opener(self, path, *args, **kwargs):
+        return self._opener(CACHEPATH + path, *args, **kwargs)
+
     def makepath(self, filectx):
         ''' Computes the path to the cache for the given file revision. '''
         relpath = os.path.join('data', filectx.path())
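The change above replaces a dedicated `util.opener` rooted at the cache directory with a thin wrapper that prefixes `annotations/` onto every path before delegating to the repository's own opener. The same delegation pattern, with a plain function standing in for `repo.opener` (the fake opener and paths here are illustrative):

```python
CACHEPATH = 'annotations/'

def make_prefixed_opener(base_opener, prefix=CACHEPATH):
    """Wrap an opener so every path is resolved under a fixed prefix."""
    def opener(path, *args, **kwargs):
        return base_opener(prefix + path, *args, **kwargs)
    return opener

# Record the calls the wrapper forwards, instead of touching the filesystem.
seen = []
def fake_repo_opener(path, mode='r'):
    seen.append((path, mode))
    return None  # a real opener would return a file object

opener = make_prefixed_opener(fake_repo_opener)
opener('data/foo.py.f', 'w')
print(seen)
# -> [('annotations/data/foo.py.f', 'w')]
```

The advantage over a second opener rooted elsewhere is that all of the repository opener's behavior (locking, path auditing, atomic writes in Mercurial's case) applies to cache files automatically.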
kiln/api/emptyui.py

@@ -0,0 +1,44 @@
+# Copyright (C) 2008-2010 Fog Creek Software. All rights reserved.
+#
+# This software may be used and distributed according to the terms of the
+# GNU General Public License version 2, incorporated herein by reference.
+
+from mercurial import ui
+import traceback
+from bugzscout import report_error
+
+class emptyui(ui.ui):
+    def __init__(self, src=None, suppressoutput=True):
+        super(emptyui, self).__init__(src)
+        if isinstance(src, emptyui):
+            self.suppressoutput = src.suppressoutput
+        else:
+            self.suppressoutput = suppressoutput
+
+        if self.suppressoutput:
+            self.pushbuffer()
+
+    # Wrap the ui's write functions because writing to stdout causes an exception.
+    # Save the output using a buffer and create a bug from it later (essentially
+    # catch the error then report it).
+    def write_err(self, *args, **opts):
+        return self.write(*args, **opts)
+
+    def write(self, *args, **opts):
+        super(emptyui, self).write(*args, **opts)
+        if self.suppressoutput:
+            if len(self._buffers) == 1:
+                super(emptyui, self).write('\n'.join(traceback.format_stack()) + '\n')
+
+    def __del__(self):
+        if self.suppressoutput:
+            buffer = self.popbuffer()
+            if buffer:
+                report_error('Mercurial output error.', buffer)
+        try:
+            super(emptyui, self).__del__()
+        except AttributeError:
+            pass
+
+    def readconfig(self, *args, **kwargs):
+        pass
@@ -3,6 +3,8 @@
 # This software may be used and distributed according to the terms of the  # GNU General Public License version 2, incorporated herein by reference.   +import difflib +import re  from pygments import highlight  from pygments.lexers import get_lexer_for_filename, guess_lexer_for_filename, TextLexer  from pygments.formatters import HtmlFormatter @@ -13,8 +15,14 @@
  'vbs': 'vb',   'fbp5': 'xml',   'xul': 'xml', + 'ipp': 'cpp',   'jsm': 'js'}   +LINE_MAX = 20000 + +def ensurenewline(s): + return s if s.endswith('\n') else s + '\n' +  def tweak(filename):   """change filename to a known extension, if applicable"""   (filename, extension) = filename.split('/')[-1].rsplit('.', 1) @@ -25,50 +33,174 @@
  """select an appropriate lexer based on the filename"""   try:   if content: - return guess_lexer_for_filename(tweak(filename), content, stripnl=False) + l = guess_lexer_for_filename(tweak(filename), content, stripnl=False)   else: - return get_lexer_for_filename(tweak(filename), stripnl=False) + l = get_lexer_for_filename(tweak(filename), stripnl=False)   except: - return TextLexer(stripnl=False) + l = TextLexer(stripnl=False) + l.add_filter('whitespace', spaces=True, wstokentype=False) + return l   -def highlighted(lex, code): - return highlight(code, lex, HtmlFormatter(nowrap=True)) +class IntralineHtmlFormatter(HtmlFormatter): + in_change = False + ranges = [] + + def __init__(self, ranges=None, *args, **kw): + if ranges: + self.ranges = ranges + HtmlFormatter.__init__(self, *args, **kw) + + def _split_change_markers(self, tokensource): + '''Pre-process the token stream before it is formatted, to mark the tokens that should be highlighted for intraline diffs.''' + ranges = self.ranges or [] + pos = 0 + for ttype, value in tokensource: + for value in value.splitlines(True): + l = len(value) + range = None + rr = [r for r in ranges if (r[0] <= pos <= r[1]) or (pos <= r[0] <= r[1] <= pos + l) or (r[0] <= pos + l <= r[1])] + if not rr: + yield ttype, value + pos += l + continue + last = None + for r in rr: + if r[0] <= pos: + # r starts at or before token + if r[1] <= pos + l: + # range covers prefix of token + self.in_change = True + i = r[1] - pos + yield ttype, value[:i] + self.in_change = False + else: + # range covers whole token + self.in_change = True + yield ttype, value + self.in_change = False + else: + # r starts in the middle of the token + i = last[1] - pos if last else 0 + j = r[0] - pos + yield ttype, value[i:j] + if r[1] <= pos + l: + # range covers middle chunk + self.in_change = True + i = r[0] - pos + j = r[1] - pos + yield ttype, value[i:j] + self.in_change = False + else: + # range covers suffix of token + self.in_change = True + i = r[0] - 
pos + yield ttype, value[i:] + self.in_change = False + last = r + if last[1] <= pos + l: + i = last[1] - pos + yield ttype, value[i:] + pos += l + + def _format_lines(self, tokensource): + return super(IntralineHtmlFormatter, self)._format_lines(self._split_change_markers(tokensource)) + + def _get_css_class(self, ttype): + return super(IntralineHtmlFormatter, self)._get_css_class(ttype) + (' ch' if self.in_change else '') + +def highlighted(lex, code, ranges=None): + return highlight(code, lex, IntralineHtmlFormatter(ranges, nowrap=True)) + +def highlight_patch(lex, lines, ranges=None): + lines = [(line[0], ensurenewline(line[1:LINE_MAX])) for line in lines] + for x in xrange(0, len(lines)): + if lines[x][0] == '\\': + lines[x] = (lines[x][0], '\n') + patch = ''.join(l[1] for l in lines) + patch = highlighted(lex, patch, ranges).splitlines(True) + for x in xrange(0, min(len(patch), len(lines))): + if lines[x][0] == '\\': + lines[x] = (lines[x][0], ' No newline at end of file\n') + else: + lines[x] = (lines[x][0], patch[x]) + return ''.join(line[0] + line[1] for line in lines) + +# returns a list of ranges (a, b), marking that characters a:b in the patch are changed. +def intraline_diff(patch): + removed_lines = [] + added_lines = [] + ranges = [] + l = 0 + + for line in patch + [' ']: + if line[0] == '-': + removed_lines.append(line[1:]) + elif line[0] == '+': + added_lines.append(line[1:]) + else: + if added_lines or removed_lines: + rtotal = sum(len(s) for s in removed_lines) + atotal = sum(len(s) for s in added_lines) + + # split the diff text into whole words and individual non-word characters + removed_words = [w for w in re.split(r'(\w+|\W)', ''.join(removed_lines)) if w] + added_words = [w for w in re.split(r'(\w+|\W)', ''.join(added_lines)) if w] + removed, added = l, l + rtotal + seq = difflib.SequenceMatcher(); + seq.set_seqs(removed_words, added_words) + + # find the matching words of each string, using the ranges in each opcode. 
+ # 'equal' action is for non-changed text; otherwise, mark the range as changed. + for (action, r1, r2, a1, a2) in seq.get_opcodes(): + ac = ''.join(added_words[a1:a2]) + rc = ''.join(removed_words[r1:r2]) + a = len(ac) + r = len(rc) + added += a + removed += r + if action == 'equal': + continue + if a != 0: + ranges.append((added - a, added)) + if r != 0: + ranges.append((removed - r, removed)) + + l += atotal + rtotal + removed_lines = [] + added_lines = [] + l += len(line) - 1 + return sorted(ranges) + +def format(filename, diff): + if not diff: + return None + formatted = [] + patch = [] + + if not isinstance(diff, unicode): + diff_asc = diff + else: + diff_asc = diff.encode('utf-8') + diff_asc = diff_asc.replace('\r', '') + lines = diff_asc.splitlines(True) + if isinstance(diff, unicode): + lines = [l.decode('utf-8') for l in lines] + + lex = lexer(filename) + for line in lines: + if line.startswith(u'@@'): + if patch: formatted.extend(highlight_patch(lex, patch, intraline_diff(patch))) + formatted.append(line) + patch = [] + else: + patch.append(line) + if patch: formatted.extend(highlight_patch(lex, patch, intraline_diff(patch))) + return ''.join(formatted)    def format_diffs(diffs): - def highlight_patch(lex, lines): - lines = [(line[0], line[1:]) for line in lines] - for x in xrange(0, len(lines)): - if lines[x][0] == '\\': - lines[x] = (lines[x][0], '\n') - patch = ''.join(l[1] for l in lines) - patch = highlighted(lex, patch).splitlines(True) - for x in xrange(0, min(len(patch), len(lines))): - if lines[x][0] == '\\': - lines[x] = (lines[x][0], ' No newline at end of file\n') - else: - lines[x] = (lines[x][0], patch[x]) - return ''.join(line[0] + line[1] for line in lines) - - def format(filename, diff): - if not diff: - return None - formatted = [] - patch = [] - diff = diff.replace('\r', '') - lines = diff.splitlines(True) - lex = lexer(filename) - for line in lines: - if line.startswith('@@'): - if patch: formatted.extend(highlight_patch(lex, 
patch)) - formatted.append(line) - patch = [] - else: - patch.append(line) - if patch: formatted.extend(highlight_patch(lex, patch)) - return ''.join(formatted) -   for d in diffs:   d['formatted_diff'] = format(d['file']['name'], d['diff'])    def format_file(filename, contents): - return highlighted(lexer(filename), contents.replace('\r', '')) + lines = [line[:LINE_MAX] for line in contents.replace('\r', '').split('\n')] + return highlighted(lexer(filename), '\n'.join(lines))
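The new `intraline_diff` finds word-level changed ranges by splitting the removed and added text into words and single non-word characters, then running `difflib.SequenceMatcher` over the word lists and marking every non-`equal` opcode span. A compact, self-contained sketch of that idea (the `changed_ranges` helper and its tagged output format are illustrative, not the changeset's API):

```python
import difflib
import re

def changed_ranges(removed, added):
    # Split both sides into whole words and individual non-word characters,
    # the same tokenization intraline_diff uses in this changeset.
    rw = [w for w in re.split(r'(\w+|\W)', removed) if w]
    aw = [w for w in re.split(r'(\w+|\W)', added) if w]
    sm = difflib.SequenceMatcher(None, rw, aw)
    ranges = []
    rpos = apos = 0
    for op, r1, r2, a1, a2 in sm.get_opcodes():
        rlen = len(''.join(rw[r1:r2]))
        alen = len(''.join(aw[a1:a2]))
        if op != 'equal':
            # mark the character span this opcode covers on each side
            if rlen:
                ranges.append(('removed', rpos, rpos + rlen))
            if alen:
                ranges.append(('added', apos, apos + alen))
        rpos += rlen
        apos += alen
    return ranges

print(changed_ranges('return foo(bar)', 'return baz(bar)'))
# [('removed', 7, 10), ('added', 7, 10)]
```

Word-level (rather than character-level) matching is what keeps the highlight aligned to identifiers instead of scattering single-character matches across a rename.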
 
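The second file in this changeset replaces Django/piston handlers with a small Flask routing layer whose central trick is a decorator that JSON-encodes plain Python return values while letting strings and responses pass through. A minimal stdlib-only sketch of that decorator pattern (the names here are illustrative, not the changeset's own):

```python
import functools
import json

def as_json(f):
    # Wrap a handler so dicts/lists are serialized to JSON while plain
    # strings pass through untouched -- the same idea as the route()/
    # jsonify() helpers in the new Flask frontend below.
    @functools.wraps(f)
    def inner(*args, **kwargs):
        r = f(*args, **kwargs)
        if isinstance(r, str):
            return r
        return json.dumps(r)
    return inner

@as_json
def manifest_stub():
    # a handler returning a plain dict, as the real routes do
    return {'type': 'manifest', 'manifest': []}

print(manifest_stub())
```

`functools.wraps` preserves the handler's name and docstring, which matters when the framework uses the function name as the endpoint identifier (as Flask does).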
@@ -1,205 +1,377 @@
-# Copyright (C) 2009-2010 by Fog Creek Software. All rights reserved.
+# Copyright (C) 2009-2011 by Fog Creek Software. All rights reserved.
 #
 # This software may be used and distributed according to the terms of the
 # GNU General Public License version 2, incorporated herein by reference.
 
+from functools import wraps
+import hashlib
+import os
+import urllib2
 
-import base64
-import os
-import sys
-import urllib
-import urllib2
-from threading import Thread
+from flask import Flask, Response, request
+from mercurial import hgweb, util, context
+from mercurial.error import LockHeld, RepoLookupError
+from werkzeug.exceptions import NotFound, BadRequest
+import settings
+import simplejson
 
-from django.conf import settings
-from django.utils import simplejson
-from mercurial import ui, util
-from piston.emitters import Emitter
-from piston.handler import AnonymousBaseHandler, typemapper
-from piston.utils import rc
+import Image
+import cStringIO
 
+from bugzscout import report_exception
+from encoders import EmittableEncoder
+from formatter import format_diffs, format_file
+from repositories import Repository, RepositoryNotSubsetException, CreatesNewHeadsException, filetuple, hexdecode, determinedisplaysize
+from webtasks import asyncpost, queue_repo_index, queue_repo_create, queue_repo_strip
+import bfiles
+import syncstatus
 import urlutil
-from formatter import format_diffs, format_file
-from repositories import Repository, RepositoryNotSubsetException, CreatesNewHeadsException
-from repositories import filetuple, hexdecode
 
-class fakerequest(object):
-    pass
-fakerequest.GET = {}
+app = Flask(__name__)
+
+def jsonify(obj):
+    if isinstance(obj, Response) or isinstance(obj, basestring):
+        return obj
+    return Response(enc.encode(obj), mimetype='application/json')
+
+def route(url, methods=['GET'], as_json=True):
+    def wrapper(f):
+        @app.route(url, methods=methods)
+        @wraps(f)
+        def inner(*args, **kwargs):
+            r = f(*args, **kwargs)
+            if as_json:
+                r = jsonify(r)
+            return r
+        return inner
+    return wrapper
+
+def get(url, as_json=True):
+    return route(url, methods=['GET'], as_json=as_json)
+
+def post(url, as_json=True):
+    return route(url, methods=['POST'], as_json=as_json)
+
+def delete(url):
+    return app.route(url, methods=['DELETE'])
 
 def error(message, code):
     return {'type': 'error', 'message': message, 'code': code}
 
-def reportexception(e):
-    if settings.DEBUG:
-        return
+enc = EmittableEncoder()
 
-    def gettraceback():
-        import traceback
-        return '\n'.join(traceback.format_exception(*(sys.exc_info())))
-
-    traceback = gettraceback()
-    bug = {'ScoutUserName': settings.FOGBUGZ_USERNAME,
-           'ScoutProject': settings.FOGBUGZ_PROJECT,
-           'ScoutArea': settings.FOGBUGZ_AREA,
-           'Description': 'Backend exception: %s' % e,
-           'Extra': traceback}
-
-    if settings.HOSTED:
-        try:
-            urllib2.urlopen(settings.FOGBUGZ_URL, urllib.urlencode(bug))
-        except:
-            pass
-    else:
-        from filelogmiddleware import _log_error
-        _log_error(bug)
-
-class PingbackThread(Thread):
-    def __init__(self, handler, method, pingback, request, args, kwargs):
-        super(PingbackThread, self).__init__()
-        self.handler = handler
-        self.method = method
-        self.pingback = pingback
-        self.request = request
-        self.args = args
-        self.kwargs = kwargs
-
-    def run(self):
-        r = self.method(self.handler, *self.args, **self.kwargs)
-        emitter, mime = Emitter.get('json')
-        srl = emitter(r, typemapper, self.handler, self.handler.fields, True)
-        json = srl.render(fakerequest)
-
-        success = False
-        attempts = 3
-        while attempts and not success:
-            try:
-                attempts -= 1
-                urllib2.urlopen(self.pingback, urllib.urlencode({'data': json.encode('utf8')}))
-                success = True
-            except urllib2.URLError, e:
-                if attempts == 0:
-                    reportexception(e)
-
-def ping_wrapper(method):
-    def f(self, *args, **kwargs):
-        q = args[0].POST
-        if 'pingback' in q:
-            t = PingbackThread(self, method, q['pingback'], args[0], args, kwargs)
-            try:
-                t.start()
-            except Exception, e:
-                print e
-            return rc.ALL_OK
-        else:
-            return method(self, *args, **kwargs)
-    return f
-
-def with_pingbacks(cls):
-    """pingback wrapper
-
-    This is a decorator that makes any given handler function run
-    asynchronously if provided with a pingback parameter in the
-    web request."""
-
-    for m in ('create', 'read', 'delete', 'update'):
-        if m in cls.__dict__.keys():
-            method = getattr(cls, m)
-            method = ping_wrapper(method)
-            setattr(cls, m, method)
-    return cls
-
-@with_pingbacks
-class RepositoryHandler(AnonymousBaseHandler):
-    allowed_methods = ('GET', 'POST', 'DELETE',)
-    model = Repository
-    fields = ('uuid', 'parent',)
-
-    def read(self, request, uuid=None):
-        if uuid:
-            r = Repository(uuid)
-            if r.exists():
-                return r
-            return rc.NOT_FOUND
-        if not settings.HOSTED:
-            return rc.BAD_REQUEST
-        return [Repository(folder)
-                for folder in os.listdir(settings.KILN_REPOSITORY_ROOT)
-                if Repository(folder).exists()]
-
-    def create(self, request):
-        q = request.POST
-        uuid = q['uuid']
-        meta = simplejson.loads(q['meta']) if q.get('meta') else {}
-        if q.get('parent'):
-            r = Repository(q['parent']).cloneto(uuid, meta)
-        else:
-            r = Repository(uuid)
-            r.create(meta)
-        return r
-
-    def delete(self, request, uuid):
+@get('/repo/<uuid>')
+def repo_get(uuid=None):
+    if uuid:
         r = Repository(uuid)
         if r.exists():
-            r.delete()
-            return rc.DELETED
-        return rc.NOT_FOUND
+            return r
+        raise NotFound
+    if not settings.HOSTED:
+        raise BadRequest
+    repos = [Repository(folder)
+             for folder in os.listdir(settings.KILN_REPOSITORY_ROOT)
+             if Repository(folder).exists()]
+    for p1 in os.listdir(settings.KILN_REPOSITORY_ROOT):
+        if len(p1) == 2:
+            for p2 in os.listdir(os.path.join(settings.KILN_REPOSITORY_ROOT, p1)):
+                parent = os.path.join(settings.KILN_REPOSITORY_ROOT, p1, p2)
+                repos.extend(Repository(folder) for folder in os.listdir(parent) if Repository(folder).exists())
+    return repos
 
-class ManifestHandler(AnonymousBaseHandler):
-    allowed_methods = ('GET',)
+@post('/repo')
+def repo_create():
+    q = request.form
+    try:
+        uuid = q['uuid']
+        pingback = q['pingback']
+        site = urlutil.siteurl(request)
+        meta = q.get('meta', None)
+        parent = q.get('parent', None)
+    except Exception, e:
+        raise
+        return BadRequest(e)
+    queue_repo_create(uuid, pingback, site, meta=meta, parent=parent)
+    return 'OK'
 
-    def read(self, request, uuid, rev='tip'):
+@post('/repo/<uuid>')
+def update_meta(uuid):
+    q = request.form
+    try:
+        meta = simplejson.loads(q['meta']) if q.get('meta') else {}
+    except:
+        raise BadRequest
+
+    r = Repository(uuid)
+    if not r.exists():
+        raise NotFound
+    r.meta = meta
+    return r
+
+@delete('/repo/<uuid>')
+def repo_delete(uuid):
+    # This can only ever be called manually, so it's okay that
+    # this key is never used on the website side. If we do ever
+    # add repository purging via heartbeat or whatever, this
+    # will obviously need to change
+    if settings.HOSTED:
+        if request.args.get('magic_word') != settings.WHITE_RABBIT_OBJECT:
+            raise BadRequest
+    r = Repository(uuid)
+    if r.exists():
+        r.delete()
+        syncstatus.remove_repo(r)
+        return Response('', status=204)
+    raise NotFound
+
+@post('/repo/<uuid>/commit')
+def commit(uuid):
+    q = request.form
+    author = q['author']
+    parent = q['parent']
+    date = q['date']
+    message = q['message']
+    path = hexdecode(q['path'])
+    upload = request.files['file']
+    if upload.content_length > settings.KILN_MAX_COMMIT_FILE_SIZE:
+        return error('The uploaded file is too large.', 'too_large')
+    data = upload.read()
+    if hasattr(upload, 'close'):
+        upload.close()
+
+    def _writefile(repo, mctx, path):
+        return context.memfilectx(path, data)
+
+    r = Repository(uuid)
+    if not r.exists():
+        raise NotFound
+
+    repo = r.repo
+    l = None
+    try:
+        l = repo.lock()
+    except LockHeld:
+        if l: l.release()
+        return error('The repository is locked.', 'repo_locked')
+    try:
+        try:
+            ctx = repo[parent]
+            if ctx.children():
+                return error('Commit creates new head!', 'not_head')
+        except RepoLookupError:
+            raise NotFound
+        mctx = context.memctx(repo, [parent, None], message, [path], _writefile, user=author, date=date)
+        mctx.commit()
+    except Exception, e:
+        report_exception(e)
+        raise
+    finally:
+        if l: l.release()
+
+    return Response('OK')
+
+@post('/repo/stripped')
+def strip():
+    q = request.form
+    uuid = q['uuid']
+    parent = q['parent']
+    pingback = q['pingback']
+    rev = q['rev']
+    url = q['url']
+    ixPerson = q['ixperson']
+    meta = q.get('meta', '')
+    parent = q['parent']
+    if not Repository(parent).exists():
+        raise NotFound
+    queue_repo_strip(pingback, uuid, parent, rev, meta, url, ixPerson)
+    return Response('OK')
+
+@get('/repo/<uuid>/manifest/<rev>')
+def manifest(uuid, rev='tip'):
+    r = Repository(uuid)
+    if not r.exists():
+        raise NotFound
+    if not r.hasrevision(rev):
+        raise BadRequest
+    return {'type': 'manifest', 'manifest': r.manifest(rev)}
+
+@get('/repo/<uuid>/size')
+def size(uuid):
+    r = Repository(uuid)
+    if not r.exists():
+        # raise NotFound
+        # Hack around a dumb bug in ourdot's Kiln install
+        return {'type': 'reposize', 'size': 0}
+    return {'type': 'reposize', 'size': r.size()}
+
+@get('/repo/<uuid>/commontag')
+def common_tags(uuid):
+    """
+    This function takes a list of checkins within a repository and
+    will return the nearest common child which has a tag.
+    """
+    r = Repository(uuid)
+    if not 'revs' in request.args or not r.exists():
+        raise BadRequest
+    else:
+        revs = request.args['revs'].split(",");
+
+    if not 'num_tags' in request.args:
+        num_tags = 1
+    else:
+        num_tags = int(request.args['num_tags'])
+    tags = r.commontags(revs, num_tags);
+
+    return {'type': 'tags', 'tags': tags};
+
+@post('/repo/<uuid>/tag/<rev>')
+def create_tag(uuid, rev='tip'):
+    r = Repository(uuid)
+    if not r.exists():
+        raise NotFound
+    try:
+        tag = request.form['tag']
+        ixPerson = request.form['ixPerson']
+        url = request.form['url']
+        username = request.form['username']
+    except KeyError:
+        raise BadRequest
+    force = False
+    if 'force' in request.form and request.form['force'].lower() != 'false':
+        force = True
+    try:
+        r.tag(rev, tag, url, ixPerson, username, force)
+    except ValueError:
+        raise BadRequest
+
+    return {'type': 'tag', 'tag': tag, 'rev': rev}
+
+@get('/repo/<uuid>/tag')
+def get_tags(uuid):
+    r = Repository(uuid)
+    if not r.exists():
+        raise NotFound
+    return {'type': 'tags', 'tags': r.tags()}
+
+@get('/repo/<uuid>/changesbetweentags')
+def betweentags(uuid):
+    r = Repository(uuid)
+    if not r.exists():
+        raise NotFound
+    try:
+        tag1 = request.args["tag1"]
+        tag2 = request.args["tag2"]
+    except KeyError:
+        raise BadRequest
+
+    try:
+        changesetlist = r.changesbetweentags(tag1, tag2, request.args.get('includelow', 'false').lower() == 'true')
+        return {'type': 'changesets', 'changesets': changesetlist}
+    except:
+        raise BadRequest
+
+@post('/repo/meta')
+def set_meta():
+    '''Takes a JSON dictionary of repo uuid => repo metadata, at the key
+    'meta', and updates the metadata for those repos. Returns a dictionary
+    of uuid => boolean, with True for repos that were found and False for
+    repos that do not exist.'''
+    exists = {}
+    meta = simplejson.loads(request.form['meta'])
+    for uuid in meta:
         r = Repository(uuid)
         if not r.exists():
-            return rc.NOT_FOUND
-        if not r.hasrevision(rev):
-            return rc.BAD_REQUEST
-        return {'type': 'manifest', 'manifest': r.manifest(rev)}
+            exists[uuid] = False
+            continue
+        exists[uuid] = True
+        r.meta = simplejson.loads(meta[uuid])
+    return exists
 
-class SizeHandler(AnonymousBaseHandler):
-    allowed_methods = ('GET',)
+@get('/repo/<uuid>/file/<rev>/')
+@get('/repo/<uuid>/file/<rev>/<path:path>')
+def get_file(uuid, path='', rev='tip'):
+    r = Repository(uuid)
+    binaries = int(request.args.get('binaries', 0))
+    images = int(request.args.get('images', 0))
+    can_truncate = not int(request.args.get('no_truncate', 0))
+    no_contents = int(request.args.get('no_contents', 0))
+    path = hexdecode(path)
+    if not r.exists():
+        raise NotFound
+    if not r.hasrevision(rev):
+        raise BadRequest
+    if r.hasfile(path, rev):
+        return filecontents(r, path, rev, binaries, images, can_truncate, no_contents)
+    else:
+        return directorylisting(r, path, rev)
 
-    def read(self, request, uuid):
-        r = Repository(uuid)
-        if not r.exists():
-            # return rc.NOT_FOUND
-            # Hack around a dumb bug in ourdot's Kiln install
-            return {'type': 'reposize', 'size': 0}
-        return {'type': 'reposize', 'size': r.size()}
+@get('/repo/<uuid>/file/<rev1>/<rev2>/<path:path>')
+def get_subtracted_image(uuid, path='', rev1='tip', rev2='tip'):
+    r = Repository(uuid)
+    path = hexdecode(path)
+    if not r.exists():
+        raise NotFound
+    if not r.hasrevision(rev1) or not r.hasrevision(rev2):
+        raise BadRequest
+    if r.hasfile(path, rev1) and r.hasfile(path, rev2):
+        # open the old and new versions of the image in RGB mode, and resize them so that the largest dimension is 300px.
+        oldcontents = Image.open(cStringIO.StringIO(r.filecontents(path, rev1, raw=1)))
+        oldcontents = resizeimage(oldcontents, displaySize=tuple(determinedisplaysize(oldcontents.size, max=(500, 500)))).convert("RGB")
+        newcontents = resizeimage(Image.open(cStringIO.StringIO(r.filecontents(path, rev2, raw=1))), displaySize=tuple(determinedisplaysize(oldcontents.size, max=(500, 500)))).convert("RGB")
+        sub = subtractimages(oldcontents, newcontents)
+
+        #im = Image.new("RGB", (oldcontents.size[0]*3, oldcontents.size[1]))
+        #im.paste(oldcontents, (0,0, sub.size[0], sub.size[1]))
+        #im.paste(sub, (sub.size[0],0, sub.size[0]*2, sub.size[1]))
+        #im.paste(newcontents, (sub.size[0]*2,0,sub.size[0]*3,sub.size[1]))
+        im = sub
+
+        output = cStringIO.StringIO()
+        im.save(output, "PNG")
+        return Response(output.getvalue())
 
-class FileHandler(AnonymousBaseHandler):
-    allowed_methods = ('GET',)
-
-    def read(self, request, uuid, path, rev='tip'):
-        r = Repository(uuid)
-        binaries = int(request.GET.get('binaries', 0))
-        can_truncate = not int(request.GET.get('no_truncate', 0))
-        path = hexdecode(path)
-        if not r.exists():
-            return rc.NOT_FOUND
-        if not r.hasrevision(rev):
-            return rc.BAD_REQUEST
-        if r.hasfile(path, rev):
-            return self.filecontents(r, path, rev, binaries, can_truncate)
-        else:
-            return self.directorylisting(r, path, rev)
-
-    def filecontents(self, repo, path, rev, binaries, can_truncate):
-        truncated = False
-        contents = repo.filecontents(path, rev)
-        ft = filetuple(path)
+def filecontents(repo, path, rev, binaries, images, can_truncate, no_contents):
+    truncated = False
+    ft = filetuple(path)
+    if repo.isbfile(path) and not binaries:
+        try:
+            Image.open(cStringIO.StringIO(repo.filecontents(path, rev, raw=True)))
+            filetype = 'image'
+            contents = '(Image file)'
+        except IOError:
+            filetype = 'binary'
+            contents = '(Binary file)'
+    elif no_contents:
+        contents = ''
+        truncated = True
+        filetype = 'text'
+    else:
+        contents = repo.filecontents(path, rev, raw=binaries)
 
     if util.binary(contents):
-        if binaries:
-            filetype = "base64"
-            contents = base64.b64encode(contents)
-        else:
-            filetype = 'binary'
-            contents = '(Binary file)'
+        if not binaries:
+            try:
+                Image.open(cStringIO.StringIO(repo.filecontents(path, rev, raw=True)))
+                filetype = 'image'
+                contents = '(Image file)'
+            except IOError:
+                filetype = 'binary'
+                contents = '(Binary file)'
     else:
         filetype = 'text'
-        if len(contents) > 300000 and can_truncate:
+        truncate_length = 200000
+        if len(contents) > truncate_length and can_truncate:
             truncated = True
-            contents = contents[:300000]
+            contents = contents[:truncate_length]
+    if binaries:
+        if images:
+            try:
+                imfile = cStringIO.StringIO()
+                resizeimage(Image.open(cStringIO.StringIO(contents))).save(imfile, "PNG")
+                contents = imfile.getvalue()
+            except IOError:
+                pass
+        return Response(contents)
+    else:
         return {'type': 'file',
                 'path': ft['path'],
                 'bytepath': ft['bytepath'],
@@ -207,209 +379,327 @@
                 'filetype': filetype,
                 'truncated': truncated,
                 'contents': contents,
-                'formatted_contents': format_file(path, contents) if can_truncate else None}
+                'formatted_contents': format_file(path, contents) if not truncated else None}
 
-    def directorylisting(self, repo, path, rev):
-        files = repo.directorylisting(path, rev)
-        if files == None:
-            return rc.NOT_FOUND
+def resizeimage(image, displaySize=None):
+    if displaySize == None:
+        displaySize = tuple(determinedisplaysize(image.size))
+    if image.size == displaySize:
+        return image
+    else:
+        return image.resize(displaySize)
+
+def subtractimages(oldimage, newimage):
+    im = Image.new("RGB", oldimage.size)
+    pix = im.load()
+    npix = newimage.load()
+    opix = oldimage.load()
+    for x in xrange(oldimage.size[0]):
+        for y in xrange(newimage.size[1]):
+            pix[x, y] = abs(npix[x, y][0] - opix[x, y][0]), abs(npix[x, y][1] - opix[x, y][1]), abs(npix[x, y][2] - opix[x, y][2])
+            pix[x, y] = leahhighlight(pix[x,y])
+    return im
+
+def andrewdifference(pix):
+    pix = f(pix[0]),f(pix[1]),f(pix[2])
+    return pix
+
+def f(x):
+    return int(256*pow((float(x)/256),.5))
+
+def leahhighlight(pix):
+    if pix[0] >= 18 and pix[1] >= 18 and pix[2] >= 18:
+        pix = 5* pix[0],10* pix[1],5* pix[2]
+    return pix
+
+def directorylisting(repo, path, rev):
+    files = repo.directorylisting(path, rev)
+    if files == None:
+        raise NotFound
+    else:
+        return {'type': 'files', 'files': files}
+
+@get('/repo/<uuid>/annotate/<rev>/<path:path>')
+def annotate(uuid, path, rev):
+    r = Repository(uuid)
+    path = hexdecode(path)
+    if not r.exists() or not r.hasfile(path, rev):
+        raise NotFound
+    contents = r.filecontents(path, rev)
+    if util.binary(contents):
+        return error('Unable to annotate binary files', 'annotate_binary')
+
+    if request.args.get('line'):
+        return linehistory(r, path, rev,
+                           int(request.args['line']), int(request.args.get('count', 4)))
+    else:
+        return filehistory(r, path, rev, int(request.args.get('count', 0)))
+
+def linehistory(r, path, rev, line, count):
+    return {'type': 'changesets', 'changesets': r.annotateline(path, rev, line, count)}
+
+def filehistory(r, path, rev, count):
+    return {'type': 'annotation', 'annotation': r.annotate(path, rev, count=count)}
+
+@get('/repo/<uuid>/branches')
+def branches(uuid):
+    r = Repository(uuid)
+    if not r.exists():
+        raise NotFound
+    return r.branches()
+
+@post('/repo/<uuid>/changeset')  # For many changesets, e.g. reviews.
+@get('/repo/<uuid>/changeset/<revs>')
+@get('/repo/<uuid>/changeset/<revs>/<filename>')
+def changesets(uuid, revs=None, filename=None):
+    r = Repository(uuid)
+    if not r.exists():
+        raise NotFound
+
+    if request.method == 'POST':
+        revs = request.form['revs']
+        filename = request.form.get('filename', None)
+
+    if filename:
+        filename = hexdecode(filename)
+
+    changedfiles = request.values.get('changedfiles')
+
+    revs = revs.split(':')
+    try:
+        if len(revs) == 1:
+            # did you instead give us an enumeration of individual changesets?
+            revs = revs[0].split(',')
+            if len(revs) == 1:
+                # Only one changeset, allow for file changesets
+                if filename:
+                    return dict(r.filechangeset(filename, revs[0]), type='filechangeset')
+                else:
+                    return dict(r.changeset(revs[0], changedfiles), type='changeset')
+            else:
+                # multiple changesets
+                return {'type': 'changesets',
+                        'changesets': r.changesets(revs, changedfiles)}
+        elif len(revs) == 2:
+            if filename:
+                limit = int(request.values.get('limit', 0))
+                return {'type': 'filechangesets',
+                        'filechangesets': r.filechangesets(filename, revs[0], revs[1], limit)}
+            else:
+                return {'type': 'changesets',
+                        'changesets': r.changesetrange(revs[0], revs[1], changedfiles)}
+    except:
+        raise BadRequest
+
+@get('/repo/<uuid>/diff/<revs>')
+@get('/repo/<uuid>/diff/<revs>/<filename>')
+def diff(uuid, revs, filename=None):
+    r = Repository(uuid)
+    if filename:
+        filename = hexdecode(filename)
+
+    if not r.exists():
+        raise NotFound
+    revs = revs.split(':')
+    for rev in revs:
+        if not r.hasrevision(rev):
+            raise BadRequest
+
+    # Set maxsize to 80 kB or as requested, unless it's a single file,
+    # in which case serve 200kb. The value of 80 kB was
+    # lovingly determined by trial and error. If you change it,
+    # please remember at least to do the former.
+    maxsize = int(request.args.get('maxsize') or (200 if filename else 80) * 1000)
+
+    ignorews = request.args.get('ignorews', 'False').lower() == 'true'
+
+    opts = dict(filename=filename, maxsize=maxsize, ignorews=ignorews)
+    if len(revs) > 1:
+        opts['rev2'] = revs[1]
+    udiff, bytecount = r.diff(revs[0], **opts)
+    format_diffs(udiff)
+
+    if filename:
+        if udiff:
+            return udiff[0]
+        return {'type': 'diff'}
+    else:
+        return {'type': 'diffs',
+                'truncated': bytecount - maxsize > 0,
+                'diffs': udiff}
+
+@get('/repo/<uuid>/outgoing/<uuid2>')
+def outgoing_get(uuid, uuid2):
+    r1 = Repository(uuid)
+    r2 = Repository(uuid2)
+    nochangesets = int(request.args.get('nochangesets', 0))
+    if not r1.exists():
+        raise NotFound
+    if not r2.exists():
+        raise BadRequest
+    if not r1.isrelated(r2):
+        return error('repositories are not related', 'notrelated')
+    return {'type': 'outgoing', 'newheads': r1.pushwouldmakeheads(r2), 'changesets': [] if nochangesets else r1.outgoing(r2)}
+
+@post('/repo/<uuid>/outgoing/<uuid2>')
+def push_repo(uuid, uuid2):
+    r1 = Repository(uuid)
+    r2 = Repository(uuid2)
+    ixPerson = request.form['ixPerson']
+    url = request.form['website']
+    if not r1.exists():
+        raise NotFound
+    if not r2.exists():
+        raise BadRequest
+    if not r1.isrelated(r2):
+        return error('repositories are not related', 'notrelated')
+    if not r1.outgoing(r2):
+        return error("repositories were already sync'd", 'alreadysyncd')
+    try:
+        return {'type': 'push', 'success': r1.push(r2, url, pusher=ixPerson)}
+    except RepositoryNotSubsetException, e:
+        return error(str(e), 'notstrictsubset')
+    except CreatesNewHeadsException, e:
+        return error(str(e), 'newheads')
+
+@post('/sync')
+def sync():
+    if not settings.HOSTED:
+        raise BadRequest
+    remote = request.form["remote"]
+    if 'repo' not in request.form:
+        # We don't have a specific repo, so we'll trigger a sync to every repo that needs it.
+        repos = syncstatus.need_sync(remote)
+        for repo in repos:
+            asyncpost(request.base_url, dict(remote=remote, repo=repo))
+        return dict(type='sync', success=True, count=len(repos))
+    resp = urllib2.urlopen(urlutil.urljoin(remote, "repo/%s" % request.form['repo']))
+    repo = simplejson.loads(resp.read())
+    failures = []
+    relink = False
+    r = Repository(repo['uuid'], suppresshooks=True)
+    if not r.exists():
+        r.create(repo['meta'])
+        relink = True
+    r.meta = repo['meta']
+    if 'bfile' in request.form:
+        try:
+            sha = request.form['bfile']
+            if bfiles.ishash(sha) and not bfiles.instore(sha):
+                resp = urllib2.urlopen(urlutil.urljoin(remote, 'repo', r.uuid, 'bfile', sha))
+                bfiles.storebfile(resp, sha)
+        except Exception, e:
+            failures.append({'repo': repo['uuid'], 'exception': e})
+            report_exception(e)
+    else:
+        remoteurl = urlutil.urljoin(remote, 'repo', r.uuid)
+        try:
+            r.pull(remoteurl)
+            if settings.DO_INDEXING:
+                queue_repo_index(repo['uuid'])
+            if settings.HOSTED:
+                syncstatus.update_status(r)
+            if relink:
+                r.relink()
+            # Chain the sync along
+            r.sync(site=urlutil.siteurl(request), peers=dict(r.ui.configitems('post_peers')))
+        except LockHeld, e:
+            # No need to report locked repos. They're expected.
+            failures.append({'repo': repo['uuid'], 'exception': e})
+        except Exception, e:
+            failures.append({'repo': repo['uuid'], 'exception': e})
+            report_exception(e, "uuid=%s, r.repo['tip'].rev()=%s, request.form=%s\n"
+                                % (repo['uuid'], str(r.repo['tip'].rev()), str(request.form)))
+    d = {'type': 'sync', 'success': not failures}
+    if failures:
+        d['failures'] = failures
+    return d
+
+@get('/version')
+def version():
+    return {'version': settings.KILN_BACKEND_VERSION, 'hg_version': util.version()}
+
+@app.route('/repo/<uuid>/bfile', methods=['GET', 'POST'])
+@app.route('/repo/<uuid>/bfile/<sha>', methods=['GET', 'POST'])
+def bfilehandle(uuid, sha=None):
+    repo = Repository(uuid)
+    if not sha:
+        if request.method == 'GET':
+            return Response(simplejson.dumps(bfiles.listbfiles()))
         else:
-            return {'type': 'files', 'files': files}
+            raise BadRequest
 
-class AnnotationHandler(AnonymousBaseHandler):
-    allowed_methods = ('GET',)
+    if request.method == 'GET':
+        try:
+            return Response(bfiles.bfilecontents(sha))
+        except IOError:
+            raise NotFound
 
-    def read(self, request, uuid, path, rev):
-        r = Repository(uuid)
-        path = hexdecode(path)
-        if not r.exists() or not r.hasfile(path, rev):
-            return rc.NOT_FOUND
-        contents = r.filecontents(path, rev)
-        if util.binary(contents):
-            return error('Unable to annotate binary files', 'annotate_binary')
+    # bfiles uses PUT to upload files but django read the entire file into memory
+    # use POST instead so that we can access the file with a generator
+    # NOTE: This may no longer be necessary with flask, but it's the way it works
+    # so there's no reason to change it back right now.
+    elif request.method == 'POST':
+        try:
+            if bfiles.instore(sha):
+                return Response(status=200)
+            elif bfiles.storebfile(request.files['name'], sha):
+                try:
+                    repo.sync(site=urlutil.siteurl(request),
+                              bfile=sha,
+                              peers=dict(repo.ui.configitems('peers')))
+                finally:
+                    return Response(status=201)
+            else:
+                #SHA1 is checked by storebfile
+                raise BadRequest('SHA1 of file does not match SHA1 given.')
+        except Exception, e:
+            report_exception(e)
+            raise BadRequest
 
-        if request.GET.get('line'):
-            return self.linehistory(r, path, rev,
-                                    int(request.GET['line']), int(request.GET.get('count', 4)))
+    elif request.method == 'HEAD':
+        if bfiles.instore(sha):
+            m = hashlib.sha1()
+            with bfiles.bfilecontents(sha) as fd:
+                while True:
+                    data = fd.read(32768)
+                    if not data:
+                        break
+                    m.update(data)
+            response = Response()
+            response.headers['Content-SHA1'] = m.hexdigest()
+            return response
         else:
-            return self.filehistory(r, path, rev)
+            raise NotFound
+    else:
+        raise BadRequest
 
-    def linehistory(self, r, path, rev, line, count):
-        return {'type': 'changesets', 'changesets': r.annotateline(path, rev, line, count)}
+@app.route('/repo/<uuid>/serve', methods=['GET', 'POST'])
+def serve(uuid):
+    r = Repository(uuid, suppressoutput=False)
+    if not r.exists():
+        raise NotFound
+    repo = r.repo
+    if 'ixPerson' in request.args:
+        repo.ui.setconfig('kiln', 'ixperson', request.args['ixPerson'])
+        repo.ui.setconfig('kiln', 'url', request.args['website'])
+        repo.ui.setconfig('kiln', 'site', urlutil.siteurl(request))
+        repo.ui.setconfig('kiln', 'token', request.args.get('token', ''))
+    # if we're about to push, run recover. Don't do this for pull,
+    # because it locks the repo (even if only for a second), and it's
+    # obviously better if we don't have to wait for a push to finish
+    # to pull
+    if request.args['cmd'] == 'unbundle':
+        r.recover()
+    request.environ['REPO_NAME'] = request.environ['PATH_INFO'].strip('/')
+    return hgweb.hgweb(repo.root, baseui=repo.ui)
 
-    def filehistory(self, r, path, rev):
-        return {'type': 'annotation', 'annotation': r.annotate(path, rev)}
-
-class ChangesetHandler(AnonymousBaseHandler):
-    allowed_methods = ('GET', 'POST')
-
-    def create(self, request, uuid):
-        revs = request.POST["revs"]
-        filename = request.POST.get("filename")
-        return self.read(request, uuid, revs, filename)
-
-    def read(self, request, uuid, revs, filename=None):
-        r = Repository(uuid)
-        if filename:
-            filename = hexdecode(filename)
-
-        if not r.exists():
-            return rc.NOT_FOUND
-
-        changedfiles = request.REQUEST.get('changedfiles')
-
-        revs = revs.split(':')
-        try:
-            if len(revs) == 1:
-                # did you instead give us an enumeration of individual changesets?
- revs = revs[0].split(',') - if len(revs) == 1: - # Only one changeset, allow for file changesets - if filename: - return dict(r.filechangeset(filename, revs[0]), type='filechangeset') - else: - return dict(r.changeset(revs[0], changedfiles), type='changeset') - else: - # multiple changesets - return {'type': 'changesets', - 'changesets': r.changesets(revs, changedfiles)} - elif len(revs) == 2: - if filename: - limit = int(request.REQUEST.get('limit', 0)) - return {'type': 'filechangesets', - 'filechangesets': r.filechangesets(filename, revs[0], revs[1], limit)} - else: - return {'type': 'changesets', - 'changesets': r.changesetrange(revs[0], revs[1], changedfiles)} - except: - return rc.BAD_REQUEST - -class DiffHandler(AnonymousBaseHandler): - allowed_methods = ('GET',) - - def read(self, request, uuid, revs, filename=None): - r = Repository(uuid) - if filename: - filename = hexdecode(filename) - - if not r.exists(): - return rc.NOT_FOUND - revs = revs.split(':') - for rev in revs: - if not r.hasrevision(rev): - return rc.BAD_REQUEST - - # Set maxsize to 100 kB or as requested, unless it's a single file, - # in which case serve the whole thing. The value of 80 kB was - # lovingly determined by trial and error. If you change it, - # please remember at least to do the former. 
- maxsize = request.GET.get('maxsize') if not filename else None - if maxsize: - maxsize = int(maxsize) - elif not filename: - maxsize = 80 * 1000 - - if len(revs) == 1: - udiff, bytecount, increment = r.diff(revs[0], filename=filename, maxsize=maxsize) - format_diffs(udiff) - else: - udiff, bytecount, increment = r.diff(revs[0], rev2=revs[1], filename=filename, maxsize=maxsize) - format_diffs(udiff) - - if filename: - return udiff[0] if udiff else rc.NOT_FOUND - else: - return {'type': 'diffs', - 'truncated': bytecount - maxsize > 0, - 'increment': increment, - 'diffs': udiff} - -class AutopullHandler(AnonymousBaseHandler): - allowed_methods = ('POST',) - - def create(self, request, uuid, uuid2): - r1 = Repository(uuid) - r2 = Repository(uuid2) - url = request.POST['website'] - if not r1.exists(): - return rc.NOT_FOUND - if not r2.exists(): - return rc.BAD_REQUEST - if not r1.isrelated(r2): - return error('repositories are not related', 'notrelated') - if not r2.outgoing(r1): - return error("repositories were already sync'd", 'alreadysyncd') - return {'type': 'push', 'success': r1.autopull(r2, url)} - -@with_pingbacks -class OutgoingHandler(AnonymousBaseHandler): - allowed_methods = ('GET', 'POST') - - def read(self, request, uuid, uuid2): - r1 = Repository(uuid) - r2 = Repository(uuid2) - nochangesets = int(request.GET.get('nochangesets', 0)) - if not r1.exists(): - return rc.NOT_FOUND - if not r2.exists(): - return rc.BAD_REQUEST - if not r1.isrelated(r2): - return error('repositories are not related', 'notrelated') - return {'type': 'outgoing', 'newheads': r1.pushwouldmakeheads(r2), 'changesets': [] if nochangesets else r1.outgoing(r2)} - - def create(self, request, uuid, uuid2): - r1 = Repository(uuid) - r2 = Repository(uuid2) - ixPerson = request.POST.get('ixPerson') - url = request.POST['website'] - if not r1.exists(): - return rc.NOT_FOUND - if not r2.exists(): - return rc.BAD_REQUEST - if not r1.isrelated(r2): - return error('repositories are not related', 
'notrelated') - if not r1.outgoing(r2): - return error("repositories were already sync'd", 'alreadysyncd') - try: - return {'type': 'push', 'success': r1.push(r2, url, pusher=ixPerson)} - except RepositoryNotSubsetException, e: - return error(str(e), 'notstrictsubset') - except CreatesNewHeadsException, e: - return error(str(e), 'newheads') - -class SynchronizeHandler(AnonymousBaseHandler): - allowed_methods = ('POST',) - - def create(self, request): - if not settings.HOSTED: - return rc.BAD_REQUEST - remote = request.POST["remote"] - if 'repo' in request.POST: - resp = urllib2.urlopen(urlutil.urljoin(remote, "repo/%s/" % request.POST['repo'])) - repos = [simplejson.loads(resp.read())] - else: - resp = urllib2.urlopen(urlutil.urljoin(remote, "repo/")) - repos = simplejson.loads(resp.read()) - u = ui.ui() - u.setconfig('ui', 'quiet', 'True') - failures = [] - for repo in repos: - r = Repository(repo['uuid'], u) - remoteurl = urlutil.urljoin(remote, "repo/%s/serve" % r.uuid) - if not r.exists(): - r.create(repo['meta']) - try: - r.pull(remoteurl) - except Exception, e: - failures.append({'repo': repo['uuid'], 'exception': e}) - reportexception(e) - d = {'type': 'sync', 'success': not failures} - if failures: - d['failures'] = failures - return d - -class VersionHandler(AnonymousBaseHandler): - allowed_methods = ('GET',) - - def read(self, request): - return {'version': settings.KILN_BACKEND_VERSION} +@get('/repo/<uuid>/heads') +def get_heads(uuid): + r = Repository(uuid) + if not r.exists(): + raise NotFound + def _revtuple(rev): + "Return a (rev num, rev id) tuple from a changeset context" + return (rev.rev(), rev.hex()) + return {'heads': [_revtuple(r.repo[head]) for head in r.repo.heads()]}
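The HEAD branch of the new `bfilehandle` handler above streams the stored file through `hashlib.sha1()` in 32 KB chunks rather than reading the whole file into memory before hashing. A minimal Python 3 sketch of that pattern (the handler itself is Python 2 and Flask-specific; `content_sha1` is a hypothetical helper name introduced here for illustration):

```python
import hashlib
import io

def content_sha1(fd, chunksize=32768):
    """Hash a file-like object in fixed-size chunks, so a large file
    never has to be held in memory at once (as in the HEAD handler)."""
    m = hashlib.sha1()
    while True:
        data = fd.read(chunksize)
        if not data:
            break
        m.update(data)
    return m.hexdigest()

# The digest is independent of the chunk size used to read the stream.
payload = b"x" * 100000
assert content_sha1(io.BytesIO(payload)) == hashlib.sha1(payload).hexdigest()
```

The resulting digest would be returned in a `Content-SHA1` header, letting a peer verify a stored bfile without downloading its contents.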
Change 1 of 1 Show Entire File kiln/api/queuestats.py Stacked
@@ -0,0 +1,62 @@
+from flask import request, render_template
+import settings
+from handlers import get, post, jsonify
+from redis import Redis
+from redis.cli import RedisCli
+
+queue_data = dict(
+    lists = [
+        'kiln:queue',
+        'kiln:queue:high',
+        'kiln:queue:low',
+        'kiln:cancelations',
+        'kiln:queue:running',
+        'opengrok:index:running',
+        'opengrok:cancelations',
+    ],
+    zsets = [
+        'opengrok:index',
+    ],
+    keys = [
+        'updaterepo:*:repo',
+        'updaterepo:*:__failcount',
+        'httppost:*:url',
+        'httppost:*:__failcount',
+    ],
+)
+
+def _get_redis():
+    return Redis(host=settings.REDIS_HOST, port=settings.REDIS_PORT, db=settings.REDIS_DB)
+
+@get('/queuestats/', as_json=False)
+def queuestats():
+    r = _get_redis()
+    data = {}
+    for l in queue_data['lists']:
+        data[l] = r.llen(l)
+    for z in queue_data['zsets']:
+        data[z] = r.zcard(z)
+    for k in queue_data['keys']:
+        data[k] = len(r.keys(k))
+    if request.headers.get('X-Requested-With', '').lower() == 'xmlhttprequest':
+        return jsonify(data)
+    return render_template('queuestats.html', data=data)
+
+@post('/queuestats/redis/cli/')
+def cli():
+    cmd = request.form['cmd']
+    try:
+        r = RedisCli(settings.REDIS_HOST, settings.REDIS_PORT).onecmd(cmd)
+    except Exception, e:
+        r = '*** Unknown exception: %s' % e
+    if r is None:
+        r = ''
+    elif isinstance(r, list):
+        r = '\n'.join(r)
+    d = dict(response=r)
+    if isinstance(r, basestring) and (r.startswith('Error') or r.startswith('***')):
+        d['type'] = 'error'
+    else:
+        d['type'] = 'success'
+    return d
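The stats loop in `queuestats()` above counts Redis lists with `LLEN`, sorted sets with `ZCARD`, and wildcard key patterns with `KEYS`. A sketch of that loop decoupled from Flask, runnable against any object exposing the redis-py calls it uses; `FakeRedis` is a test stand-in invented here, while in production the client would be a real `redis.Redis(...)` connection:

```python
import fnmatch

class FakeRedis:
    """In-memory stand-in for the three redis-py calls queuestats() uses."""
    def __init__(self, lists, zsets, kv):
        self.lists, self.zsets, self.kv = lists, zsets, kv
    def llen(self, name):
        return len(self.lists.get(name, []))
    def zcard(self, name):
        return len(self.zsets.get(name, set()))
    def keys(self, pattern):
        return [k for k in self.kv if fnmatch.fnmatch(k, pattern)]

def gather_stats(r, queue_data):
    """Return {name: count} for every list, sorted set, and key pattern."""
    data = {}
    for l in queue_data['lists']:
        data[l] = r.llen(l)       # queue length
    for z in queue_data['zsets']:
        data[z] = r.zcard(z)      # sorted-set cardinality
    for k in queue_data['keys']:
        data[k] = len(r.keys(k))  # number of keys matching the pattern
    return data

r = FakeRedis(lists={'kiln:queue': ['job1', 'job2']},
              zsets={'opengrok:index': {'repo-a'}},
              kv={'updaterepo:1:repo': 'x', 'updaterepo:2:repo': 'y'})
stats = gather_stats(r, {'lists': ['kiln:queue'],
                         'zsets': ['opengrok:index'],
                         'keys': ['updaterepo:*:repo']})
assert stats == {'kiln:queue': 2, 'opengrok:index': 1, 'updaterepo:*:repo': 2}
```

Note that `KEYS` scans the whole keyspace, which is acceptable for an occasional admin stats page like this but would be avoided on a hot path.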
@@ -4,19 +4,35 @@
 # GNU General Public License version 2, incorporated herein by reference.     +from ConfigParser import RawConfigParser +import cStringIO  import codecs  import os  import re  import shutil -from ConfigParser import SafeConfigParser +import socket +import sys +import urllib +import urllib2 +import urlutil   -from mercurial import cmdutil, encoding, error, hg, match, patch, \ - transaction, ui, util +import Image +import simplejson   -from django.conf import settings +from mercurial import cmdutil, context, discovery, encoding, error, hg, \ + match, node, patch, pushkey, repair, subrepo, templatekw, \ + transaction, util +from hgext import relink as relinker + +import settings    from annotationcache import cachedannotate as _cachedannotate +from emptyui import emptyui +from webtasks import asyncpost, queue_repo_index +import bfiles +import tailtracking   +hex = node.hex  propertycache = util.propertycache    def cachedannotate(self, follow=False, linenumber=None): @@ -24,14 +40,18 @@
   def uc(s):   """clean the Unicode string, replacing errors""" - return unicode(s, encoding='utf8', errors='replace').lstrip(unicode(codecs.BOM_UTF8, 'utf8')) + if not isinstance(s, unicode): + return unicode(s, encoding='utf8', errors='replace').lstrip(unicode(codecs.BOM_UTF8, 'utf8')) + return s.lstrip(unicode(codecs.BOM_UTF8, 'utf8'))    def ensurenewline(s):   return s if s.endswith('\n') else s + '\n'    def hexencode(bytestr):   """converts a byte string to its hex string representation""" - return ''.join("%02X" % ord(x) for x in bytestr) + if ' ' in bytestr and bytestr.endswith('\t'): + bytestr = bytestr[:-1] + return ''.join(["%02X" % ord(x) for x in bytestr])    def hexdecode(hexstr):   """converts a hex string to the equivalent byte string""" @@ -53,6 +73,69 @@
  This is taken from Peter Norvig. I therefore assume it's safe."""   return hasattr(x, '__int__')   +def reportexception(e): + if settings.DEBUG: + return + + def gettraceback(): + import traceback + return '\n'.join(traceback.format_exception(*(sys.exc_info()))) + + traceback = gettraceback() + bug = {'ScoutUserName': settings.FOGBUGZ_USERNAME, + 'ScoutProject': settings.FOGBUGZ_PROJECT, + 'ScoutArea': settings.FOGBUGZ_AREA, + 'Description': 'Backend exception: %s' % e, + 'Extra': traceback} + + if settings.HOSTED: + try: + urllib2.urlopen(settings.FOGBUGZ_URL, urllib.urlencode(bug)) + except: + pass + else: + from errorloggingmiddleware import _log_error + _log_error(bug) + +def determinedisplaysize((width, height), max=(500,500)): + if width > height: + if width > max[0]: + return [max[0], height * max[0] / width] + else: + if height > max[1]: + return [width * max[1] / height, max[1]] + return [width, height] + +def comparemetadata(im1, im2, pending): + if determinedisplaysize(im1.size) == determinedisplaysize(im2.size): + pending['issamesize'] = True + #try to understand metadata changes, and record in pending + pending['metadiff'] = [] + + #format + if im1.format != im2.format: + format = {'type': 'format', 'oldvalue': im1.format, 'newvalue': im2.format} + pending['metadiff'].append(format) + else: + #special attributes + if im1.format in ("FLI", "FLC", "GIF") and im1.info['duration'] != im2.info['duration']: + duration = {'type': 'duration', 'oldvalue': im1.info['duration'], 'newvalue': im2.info['duration']} + pending['metadiff'].append(duration) + if im1.format in ("GIF", "PNG", "XPM") and im1.info['transparency'] != im2.info['transparency']: + transparency = {'type': 'transparency', 'oldvalue': im1.info['transparency'], 'newvalue': im2.info['transparency']} + pending['metadiff'].append(transparency) + + #size + if im1.size != im2.size: + size = {'type': 'size', + 'oldvalue': '%sx%s' % im1.size, + 'newvalue': '%sx%s' % im2.size} + 
pending['metadiff'].append(size) + #mode + if im1.mode != im2.mode: + mode = {'type': 'mode', 'oldvalue': im1.mode, 'newvalue': im2.mode} + pending['metadiff'].append(mode) +  class RepositoryNotSubsetException(Exception):   def __init__(self, pusher, pushee):   self.pusher = pusher @@ -70,27 +153,37 @@
  return "pushing from %s to %s would introduce new heads" % (self.pusher.uuid, self.pushee.uuid)    class Repository(object): - def __init__(self, uuid, ui=None): + def __init__(self, uuid, suppresshooks=False, suppressoutput=True, repo=None):   self.uuid = uuid + self.suppresshooks = suppresshooks + self.suppressoutput = suppressoutput + if repo is not None: + self.repo = repo   - def annotate(self, path, rev): + def annotate(self, path, rev, count=None):   """provide line annotations for the provided file"""     r = self.repo   ctxs = {}   lines = [] - for l in cachedannotate(r[rev][path], linenumber=True): + line = 0 + if count == 0: count = None + for l in cachedannotate(r[rev][path], follow=True, linenumber=True): + line = line + 1 + if count is not None and line > count: + break   rev = l[0][0].rev()   if rev not in ctxs:   ctxs[rev] = l[0][0].changectx()   ctx = ctxs[rev] - lines.append({'type': 'annotation', - 'rev': (rev, ctx.hex()), + file = None + if l[0][0].path() != path: + file = filetuple(l[0][0].path()) + lines.append({'rev': (rev, ctx.hex()),   'date': ctx.date(),   'user': ctx.user(), - 'line': uc(l[1]),   'origline': l[0][1], - 'file': filetuple(l[0][0].path())}) + 'file': file})   return lines     def annotateline(self, path, rev, linenum, count): @@ -117,12 +210,15 @@
  if rev not in ctxs:   ctxs[rev] = fctx.changectx()   ctx = ctxs[rev] + file = None + if fctx.path() != path: + file = filetuple(fctx.path())   revs[rev] = {'type': 'lineannotation',   'rev': (ctx.rev(), ctx.hex()),   'date': ctx.date(),   'user': ctx.user(),   'description': ctx.description(), - 'file': filetuple(fctx.path()), + 'file': file,   'origline': origline}   parents = fctx.parents()   if not parents: @@ -133,13 +229,48 @@
  revs.sort(lambda r1, r2: r1['rev'][0] - r2['rev'][0], reverse=True)   return revs   + def branches(self): + r = self.repo + activebranches = [r[n].branch() for n in r.heads()] + def testactive(tag, node): + realhead = tag in activebranches + open = node in r.branchheads(tag, closed=False) + return realhead and open + branches = dict([(uc(tag), + {'hex': hex(r.lookup(r.changelog.rev(node))), + 'rev': r.changelog.rev(node)}) + for tag, node in r.branchtags().items() + if testactive(tag, node)]) + return branches + + def changesbetweentags(self, tag1, tag2, includelow=False): + r = self.repo + if tag1 not in r.tags().keys() or tag2 not in r.tags().keys(): + return [] + (low, high) = [rev for rev in sorted([r[tag1], r[tag2]], lambda x,y: x.rev() - y.rev())] + lowset = set(low.ancestors()) + if not includelow: + lowset.add(low) + highset = set(high.ancestors()) + highset.add(high) + return list([self._ctx(rev, None) for rev in highset - lowset]) +   def create(self, meta):   """Create or add the repository at the specified path"""   u = self.ui - hg.repository(u, self._hg_path(), create=1) - self._add_meta(meta) + p = self._hg_path(force_new=True) + try: + os.makedirs(os.path.dirname(p)) + except: + pass + hg.repository(u, self._hg_path(force_new=True), create=1) + try: + self._add_meta(meta) + except Exception, e: + reportexception(e) + raise   - def _ctx(self, ctx, changedfiles): + def _ctx(self, ctx, changedfiles, fnrename=None):   """turn a Mercurial context into a Python dictionary"""   d = {'user': uc(ctx.user()),   'date': ctx.date(), @@ -147,9 +278,30 @@
  'description': uc(ctx.description()),   'branch': uc(ctx.branch()),   'tags': [uc(s) for s in ctx.tags()], + 'extra': dict((uc(k), uc(v)) for k, v in ctx.extra().iteritems()),   'parents': [(p.rev(), p.hex()) for p in ctx.parents()]}   if changedfiles: - d['files'] = [filetuple(f) for f in ctx.files()] + # Generate list of changed files + files = set(ctx.files()) + if node.nullid not in ctx.parents() and len(ctx.parents()) == 2: + mc = ctx.manifest() + mp1 = ctx.parents()[0].manifest() + mp2 = ctx.parents()[1].manifest() + for f in mp1: + if f not in mc: + files.add(f) + for f in mp2: + if f not in mc: + files.add(f) + for f in mc: + if mc[f] != mp1.get(f, None) or mc[f] != mp2.get(f, None): + files.add(f) + d['files'] = [filetuple(f) for f in files] + if fnrename: + rename = fnrename(ctx.hex()) + if rename: + d['rename'] = {'file': filetuple(rename[0]), + 'rev': hex(self.repo.filectx(rename[0], fileid=rename[1]).node())}   return d     def changeset(self, rev, changedfiles=False): @@ -173,19 +325,60 @@
  return [self._ctx(r[rev], changedfiles) for rev in xrange(rev1, rev2 + 1)]     def cloneto(self, uuid, meta): - "Clone this repository to the supplied UUID" - u = self.ui - path = os.path.join(settings.KILN_REPOSITORY_ROOT, uuid) - hg.clone(u, self._hg_path(), encoding.tolocal(path), update=False) - r = Repository(uuid) - r._add_meta(meta) - return r + """clone this repository to the supplied UUID""" + try: + u = self.ui + path = Repository(uuid)._hg_path(force_new=True) + try: + os.makedirs(os.path.dirname(path)) + except: + pass + hg.clone(u, {}, self._hg_path(), encoding.tolocal(path), update=False) + r = Repository(uuid) + r._add_meta(meta) + if settings.DO_INDEXING: + queue_repo_index(uuid) + return r + except Exception, e: + reportexception(e) + raise + + def commontags(self, revids, numtags): + """return the closest tag which belongs to a common child""" + revs = [self.repo[rev].rev() for rev in revids] + low = min(rev for rev in revs) + ctxs = {} + for rev in xrange(low, len(self.repo)): + ctx = self.repo[rev] + ctxs[rev] = {'children': [], 'tags': ctx.tags()} + for p in self.repo.changelog.parentrevs(rev): + if p >= low: + ctxs[p]['children'].append(rev) + mastertags = None + for rev in revs: + queue = [rev] + tags = [] + visited = set() + while queue: + rev = queue.pop() + ctx = ctxs[rev] + if ctx['tags']: + tags.append(rev) + visited.add(rev) + queue.extend(c for c in ctx['children'] if c not in visited) + if not mastertags: + mastertags = set(tags) + else: + mastertags = mastertags.intersection(tags) + mastertags = sorted(mastertags) + closesttags = sum([ctxs[rev]['tags'] for rev in sorted(mastertags)],[])[:numtags] + return closesttags     def delete(self):   """delete this repository from disk"""   shutil.rmtree(self.path(), True)   - def diff(self, rev1, rev2=None, filename=None, maxsize=None): + def diff(self, rev1, rev2=None, filename=None, maxsize=None, ignorews=False):   """diff the provided revisions, optionally specific to a filename     
Mercurial internally uses git diffs, which, unfortunately, are @@ -193,6 +386,7 @@
  structured data out of Mercurial's diff system, and return it,   ready for consumption, to the calling process."""   + filemaxsize = None   gitre = re.compile('diff --git a/(.*) b/(.*)')   def axn(type, oldname=None, modes=None):   d = {'type': type, 'modes': modes} @@ -200,17 +394,14 @@
  d['oldname'] = filetuple(oldname)   return d   - def fixn(s): - """fix a diff-extracted filename""" - return s[:-2] if ' ' in s else s[:-1] -   class bytecounts(object):   def __init__(self):   self.total = 0 - self.increment = 0     def amenddiffs(patch, pending, counts): - lines = patch.splitlines(True) + # Only the first 5 lines of patch need to be seperated from the rest + # in order to compute a diff. (5 are needed for mode change + rename/copy) + lines = patch.rstrip('\n').split('\n', 4)   if lines[0].startswith('diff --git'):   # new patches start with git patches. Finish the previous   # patch and begin a new one @@ -236,16 +427,16 @@
  if lines[1].startswith('deleted file'):   pending['action'] = axn('deletion')   elif lines[1].startswith('rename'): - oldfilename = fixn(lines[1][12:]) - filename = fixn(lines[2][10:]) + oldfilename = lines[1][12:] + filename = lines[2][10:]   pending['file'] = filetuple(filename)   pending['action'] = axn('rename', oldname=oldfilename)   elif lines[1].startswith('new file'):   pending['file'] = filetuple(m.group(2))   pending['action'] = axn('creation')   elif lines[1].startswith('copy'): - oldfilename = fixn(lines[1][10:]) - filename = fixn(lines[2][8:]) + oldfilename = lines[1][10:] + filename = lines[2][8:]   pending['file'] = filetuple(filename)   pending['action'] = axn('copy', oldname=oldfilename)   else: @@ -254,25 +445,44 @@
  elif lines[0].startswith('index ') or lines[0].startswith('Binary file '):   # We cannot improve on whatever the git diff parser ripped out;   # just return what we already got - pending['diff'] = 'Binary file' - pending['file']['filetype'] = 'binary' + try: + pending['issamesize'] = False + try: + im1 = Image.open(cStringIO.StringIO(self.repo[rev1].filectx(pathjoin(pending['file']['path'], pending['file']['name'])).data())) + try: + im2 = Image.open(cStringIO.StringIO(self.repo[rev2].filectx(pathjoin(pending['file']['path'], pending['file']['name'])).data())) + comparemetadata(im1,im2, pending) + except LookupError: + pass + except LookupError: + im2 = Image.open(cStringIO.StringIO(self.repo[rev2].filectx(pathjoin(pending['file']['path'], pending['file']['name'])).data())) + pending['diff'] = 'Image file' + pending['file']['filetype'] = 'image' + except IOError, e: + pending['diff'] = 'Binary file' + pending['file']['filetype'] = 'binary'   return pending     # Normal diff   if 'action' in pending and pending['action']['type'] in ('rename', 'creation', 'copy'): - filename = fixn(lines[1][6:]) + filename = lines[1][6:]   else: - filename = fixn(lines[0][6:]) + filename = lines[0][6:] + if self.isbfilestandin(filename): + return {}   components = filename.split('/')   path = '/'.join(components[:-1])   name = components[-1]   pending['file'] = {'path': uc(path), 'name': uc(name), 'bytepath': hexencode(pathjoin(path, name))} - s = ensurenewline(uc(''.join(lines[2:]))) - counts.total += len(s) + counts.total += sum(len(l) for l in lines[2:])   if maxsize is None or maxsize >= counts.total: + s = ensurenewline(u'\n'.join([uc(l) for l in lines[2:]])) + if filemaxsize and len(s) > filemaxsize: + s = ensurenewline(s[:filemaxsize]) + pending['truncated'] = True   pending['diff'] = s - elif not counts.increment: - counts.increment = counts.total + else: + pending['truncated'] = True   return pending     r = self.repo @@ -284,9 +494,11 @@
  rev2 = r[rev2].hex()   if filename:   m = match.match(r.root, None, patterns=[filename], default='path') + filemaxsize = maxsize + maxsize = None   else:   m = match.always(r.root, None) - patches = patch.diff(r, rev1, rev2, match=m, opts=patch.diffopts(r.ui, {'git': True})) + patches = patch.diff(r, rev1, rev2, match=m, opts=patch.diffopts(r.ui, {'git': True, 'ignore_all_space': ignorews}))   diffs = []   pending = None   counts = bytecounts() @@ -294,48 +506,124 @@
             pending = amenddiffs(p, pending, counts)
         if pending:
             diffs.append(pending)
-        if r[rev1] in r[rev2].parents():
-            # filter out parent revision adoption on files on merge changesets
-            files = [hexencode(s) for s in r[rev2].files()]
-            diffs = [d
-                     for d in diffs
-                     if d['file']['bytepath'] in files]
-        return (diffs, counts.total, counts.increment)
+
+        bfilesrev1 = self.listbfiles(rev1)
+        bfilesrev2 = self.listbfiles(rev2)
+
+        def bfilediff(path, rev1=None, rev2=None, action=None):
+            if rev1 != None:
+                if rev2 != None:
+                    d = bfilemetaimagediff(path, rev1, rev2)
+                else:
+                    d = bfileimagediff(path, rev1)
+            elif rev2 != None:
+                d = bfileimagediff(path, rev2)
+            else:
+                d = {'diff': 'Binary file', 'type': 'diff',
+                     'file': {'path': uc(os.path.dirname(path)), 'bytepath': hexencode(os.path.basename(path)),
+                              'name': uc(os.path.basename(path)), 'filetype': 'binary'}}
+            if action is not None:
+                d['action'] = action
+            return d
+
+        def bfileimagediff(path, rev):
+            try:
+                Image.open(cStringIO.StringIO(self.filecontents(path, rev, raw=True)))
+                d = {'diff': 'Image file', 'type': 'diff',
+                     'file': {'path': uc(os.path.dirname(path)), 'bytepath': hexencode(os.path.basename(path)),
+                              'name': uc(os.path.basename(path)), 'filetype': 'image'}}
+            except IOError:
+                d = {'diff': 'Binary file', 'type': 'diff',
+                     'file': {'path': uc(os.path.dirname(path)), 'bytepath': hexencode(os.path.basename(path)),
+                              'name': uc(os.path.basename(path)), 'filetype': 'binary'}}
+            return d
+
+        def bfilemetaimagediff(path, rev1, rev2):
+            try:
+                im1 = Image.open(cStringIO.StringIO(self.filecontents(path, rev1, raw=True)))
+                im2 = Image.open(cStringIO.StringIO(self.filecontents(path, rev2, raw=True)))
+                d = {'diff': 'Image file', 'type': 'diff',
+                     'file': {'path': uc(os.path.dirname(path)), 'bytepath': hexencode(os.path.basename(path)),
+                              'name': uc(os.path.basename(path)), 'filetype': 'image'}}
+                comparemetadata(im1, im2, d)
+            except IOError:
+                d = {'diff': 'Binary file', 'type': 'diff',
+                     'file': {'path': uc(os.path.dirname(path)), 'bytepath': hexencode(os.path.basename(path)),
+                              'name': uc(os.path.basename(path)), 'filetype': 'binary'}}
+            return d
+
+        for bfile in bfilesrev1:
+            if bfile not in bfilesrev2:
+                diffs.append(bfilediff(bfile, rev1=rev1, action={'type': 'deletion', 'modes': None}))
+            elif self.getbfilesha(bfile, rev1) != self.getbfilesha(bfile, rev2):
+                diffs.append(bfilediff(bfile, rev1, rev2))
+
+        for bfile in bfilesrev2:
+            if bfile not in bfilesrev1:
+                diffs.append(bfilediff(bfile, rev2=rev2, action={'type': 'creation', 'modes': None}))
+
+        return diffs, counts.total
+
+    def getbfilesha(self, path, rev='tip'):
+        return self.repo[rev][pathjoin(bfiles.standinprefix, path)].data().strip()
+
+    def isbfile(self, path, rev='tip'):
+        try:
+            self.repo[rev][pathjoin(bfiles.standinprefix, path)]
+            return True
+        except:
+            return False
+
+    def listbfiles(self, rev='tip'):
+        return [s[len(bfiles.standinprefix + '/'):]
+                for s in self.repo[rev].manifest()
+                if s.startswith(bfiles.standinprefix + '/')]
+
+    def isbfilestandin(self, path):
+        return path.startswith(bfiles.standinprefix + '/')
 
     def directorylisting(self, path, rev='tip'):
         """return a directory listing for the specified path
 
         Dictionaries conforming to the Kiln backend spec are returned."""
-
         r = self.repo
         ctx = r[rev]
+        substate = subrepo.state(ctx, self.repo.ui)
         if path and path[-1] != '/':
             path += '/'
         l = len(path)
-        candidates = [s[l:]
-                      for s in ctx.manifest()
-                      if s.startswith(path) and len(s) > l]
+
+        candidates = [(s[l:], False) for s in ctx.manifest() if not self.isbfilestandin(s) and s.startswith(path) and len(s) > l]
+        candidates.extend((s[l:], True) for s in self.listbfiles() if s.startswith(path) and len(s) > l)
+        candidates.extend((s[l:], False) for s in substate.keys() if s.startswith(path) and len(s) > l and '/' not in s[l:])
         if not candidates:
             return None if path else []
+
         listing = {}
         ctxcache = {}
-        for s in candidates:
+        for s, isbfile in candidates:
             components = s.split('/')
             f = components[0]
             listing[f] = {'path': uc(path), 'name': uc(f), 'bytepath': hexencode(path + f)}
-            if len(components) > 1:
+            if (path + s) in substate:
+                listing[f]['type'] = 'subrepo'
+                listing[f]['subdata'] = substate[path+s]
+            elif len(components) > 1:
                 listing[f]['type'] = 'directory'
             else:
-                fctx = ctx[path + f]
+                if isbfile:
+                    fctx = ctx[bfiles.standinprefix + '/' + path + f]
+                else:
+                    fctx = ctx[path + f]
                 if fctx.linkrev() not in ctxcache:
                     ctxcache[fctx.linkrev()] = r[fctx.linkrev()]
                 c = ctxcache[fctx.linkrev()]
                 listing[f]['type'] = 'file'
                 listing[f]['flags'] = fctx.flags()
-                listing[f]['size'] = fctx.size()
+                listing[f]['size'] = int(bfiles.getbfilesize(self.getbfilesha(path + f, rev))) if isbfile else fctx.size()
                 listing[f]['rev'] = (c.rev(), c.hex())
                 listing[f]['date'] = c.date()
-        return [f for f in listing.itervalues()]
+        return listing.values()
 
     def exists(self):
         """Returns true if a repository exists at self.path"""
@@ -347,15 +635,18 @@
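The bfile helpers above all key off a "standin" prefix: a big file is represented in the Mercurial manifest by a small stand-in entry under `bfiles.standinprefix` whose content is the hash of the real blob. As a rough illustration of that path mapping (the `.kbf` prefix here is an assumption for the sketch, not necessarily Kiln's actual `bfiles.standinprefix` value):

```python
import posixpath

# Hypothetical stand-in prefix; the real value lives in bfiles.standinprefix.
STANDIN_PREFIX = '.kbf'

def standin(path):
    """Map a big-file path to its stand-in path in the manifest."""
    return posixpath.join(STANDIN_PREFIX, path)

def split_standin(manifest_path):
    """Return the original big-file path if manifest_path is a stand-in, else None."""
    prefix = STANDIN_PREFIX + '/'
    if manifest_path.startswith(prefix):
        return manifest_path[len(prefix):]
    return None

def list_bigfiles(manifest):
    """Mirror of listbfiles(): strip the prefix from every stand-in entry."""
    return [p for p in (split_standin(s) for s in manifest) if p is not None]
```

This is why `directorylisting()` filters stand-ins out of the ordinary manifest walk and re-adds them via `listbfiles()` with an `isbfile` flag.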
         except util.Abort:
             return False
 
-    def filecontents(self, path, rev='tip'):
+    def filecontents(self, path, rev='tip', raw=False):
         """return the complete file contents at the provided revision"""
-        r = self.repo
+        if self.isbfile(path, rev):
+            fd = bfiles.bfilecontents(self.getbfilesha(path, rev))
+            data = fd.read()
+            fd.close()
+            return data
         try:
-            s = r[rev].filectx(path).data()
-            if not util.binary(s):
-                return uc(s)
-            else:
+            s = self.repo[rev].filectx(path).data()
+            if raw or util.binary(s):
                 return s
+            return uc(s)
         except:
             return None
 
@@ -370,35 +661,46 @@
                 'branch': ctx.branch(),
                 'tags': [uc(s) for s in ctx.tags()],
                 'parents': [(p.rev(), p.changectx().hex()) for p in fctx.parents()]}
+
     def filechangeset(self, path, rev='tip'):
         """return a dictionary representing the file changeset requested"""
+        if self.isbfile(path, rev):
+            path = pathjoin(bfiles.standinprefix, path)
         r = self.repo
         return self._filectx(r[rev][path])
+
     def filechangesets(self, path, rev1, rev2, limit=None):
         """return a history of the given file between the provided revisions"""
+        if self.isbfile(path, rev1) or self.isbfile(path, rev2):
+            path = pathjoin(bfiles.standinprefix, path)
         r = self.repo
-        rev1 = r[rev1].rev()
-        if rev2 == 'tip':
-            flog = r.file(path)
-            rev2 = flog.linkrev(flog.rev(flog.tip()))
-        rev2 = r[rev2].rev()
-        n = r[r[rev2][path].linkrev()][path]
-        queue = [n]
-        revs = {}
-        while queue and queue[-1].rev() >= rev1:
-            n = queue.pop()
-            revs[n.rev()] = n
-            queue.extend([p for p in n.parents() if p not in revs and p not in queue])
-            queue.sort(lambda x, y: x.rev() - y.rev())
-        log = sorted(revs.values(), lambda x, y: x.rev() - y.rev())
+        rev = ['%s:%s' % (rev1, rev2)]
+        if not rev1 and not rev2:
+            rev = None
+        removed = False
+        # walkchangerevs is much slower with removed=True. We can make an
+        # imperfect assumption that we don't care about removals, except when
+        # getting all revs or when the file does not exist at the tip.
+        # Big speedup without generally losing data.
+        if not rev or (path not in r[rev2].manifest()):
+            removed = True
+        matchfn = match.exact(r.root, None, [path])
+        log = [c for c in cmdutil.walkchangerevs(r, matchfn, {'rev': rev, 'removed': removed}, lambda *args: None)]
         if limit:
             log = log[-limit:]
-        return [self._filectx(n) for n in log]
+
+        # getrenamedfn answers if a given file was renamed at a given revision, in a cache-friendly way
+        _fnrename = templatekw.getrenamedfn(r, rev2)
+        def fnrename(rev):
+            return _fnrename(path, rev)
+
+        return [self._ctx(c, False, fnrename) for c in log]
 
     def hasfile(self, path, rev='tip'):
-        r = self.repo
+        if self.isbfile(path, rev):
+            return True
         try:
-            r[rev][path]
+            self.repo[rev][path]
             return True
         except:
             return False
@@ -429,29 +731,39 @@
  """return a list of dictionaries representing the outgoing changesets"""   r1 = self.repo   r2 = r2.repo - o = r1.findoutgoing(r2) + o = discovery.findoutgoing(r1, r2)   if not o:   return []   else:   return [self._ctx(r1[ctx], False) for ctx in r1.changelog.nodesbetween(o, None)[0]]     def manifest(self, rev='tip'): - return [{'fullpath': uc(name), 'bytepath': hexencode(name)} - for name in self.repo[rev].manifest().keys()] + mani = [{'fullpath': uc(name), 'bytepath': hexencode(name)} + for name in self.repo[rev].manifest().keys() + if not self.isbfilestandin(name)] + bfs = [{'fullpath': uc(name), 'bytepath': hexencode(name)} + for name in self.listbfiles(rev)] + return mani + [d for d in bfs if d not in mani]     @property   def parent(self): - """Return the parent of this repository, based on the hgrc file""" + """return the parent of this repository, based on the hgrc file"""   try: - hgrc = SafeConfigParser() + hgrc = RawConfigParser()   hgrc.read(os.path.join(self.path(), '.hg', 'hgrc'))   return Repository(hgrc.get('paths', 'default'))   except:   return None   - def path(self): - """Return the fully qualified local path to this repository""" - return os.path.join(settings.KILN_REPOSITORY_ROOT, self.uuid) + def path(self, force_new=False): + """return the fully qualified local path to this repository""" + new_path = os.path.join(settings.KILN_REPOSITORY_ROOT, self.uuid[:2], self.uuid[2:4], self.uuid) + old_path = os.path.join(settings.KILN_REPOSITORY_ROOT, self.uuid) + if force_new or os.path.exists(new_path): + return new_path + elif os.path.exists(old_path): + return old_path + return new_path     def autopull(self, repo2, url):   """pull the other repository to this repository @@ -470,6 +782,7 @@
         r1.ui.setconfig('kiln', 'url', url)
         r1.ui.setconfig('kiln', 'source', repo2.uuid)
         lock = r1.lock()
+
         r2 = repo2.repo
         try:
             r1.pull(r2)
@@ -499,9 +812,10 @@
         r1 = self.repo
         r1.ui.pushbuffer()
         success = False
+
         if r2.outgoing(self):
             raise RepositoryNotSubsetException(self, r2)
-        elif r1.prepush(r2.repo, False, None)[0] is None:
+        elif discovery.prepush(r1, r2.repo, False, None, True)[0] is None:
             raise CreatesNewHeadsException(self, r2)
         else:
             r2 = r2.repo
@@ -509,7 +823,7 @@
             r2.ui.setconfig('kiln', 'url', url)
             r2.ui.setconfig('kiln', 'source', self.uuid)
             r2.ui.setconfig('kiln', 'client', 'website')
-            success = r1.push(r2) != 0
+            success = r1.push(r2, newbranch=True) != 0
             r1.ui.popbuffer()
         finally:
             try:
@@ -523,36 +837,60 @@
         r1 = self.repo
         r2 = r2.repo
         r1.ui.pushbuffer()
-        prepush = r1.prepush(r2, False, None)
+        prepush = discovery.prepush(r1, r2, False, None, True)
         r1.ui.popbuffer()
         return prepush[0] is None and prepush[1] == 0
 
     def pull(self, url):
         """pull into this repository from another repository"""
         r1 = self.repo
-        r2 = hg.repository(self.ui, url)
-        r1.recover()
-        return r1.pull(r2)
+        r2 = hg.repository(self.ui, urlutil.urljoin(url, 'serve'))
+        self.recover()
+
+        for bfile in simplejson.loads(urllib2.urlopen(urlutil.urljoin(url, 'bfile')).read()):
+            if bfiles.ishash(bfile) and not bfiles.instore(bfile):
+                resp = urllib2.urlopen(urlutil.urljoin(url, 'bfile', bfile))
+                bfiles.storebfile(resp, bfile)
+
+        ret = r1.pull(r2)
+        return ret
 
     def recover(self):
         """recover this repo, if necessary, quietly"""
         r = self.repo
+        r.ui.pushbuffer()
+        lock = r.lock()
         try:
-            lock = r.lock()
             if os.path.exists(r.sjoin('journal')):
                 return transaction.rollback(r.sopener, r.sjoin('journal'),
                                             r.ui.status)
         finally:
             lock.release()
+            r.ui.popbuffer()
+
+    def relink(self):
+        r = self.repo
+        r.ui.pushbuffer()
+        try:
+            tried = set()
+            # Sometimes the other repo doesn't exist on this machine, so we'll try a few of them.
+            for i in range(3):
+                other = tailtracking.random_tail(self)
+                if not other:
+                    break
+                if other in tried:
+                    continue
+                tried.add(other)
+                other_repo = Repository(other)
+                if not other_repo.exists():
+                    continue
+                relinker.relink(r.ui, r, other_repo.path())
+        finally:
+            r.ui.popbuffer()
 
     @propertycache
     def repo(self):
-        r = hg.repository(self.ui, self._hg_path())
-        # This UI tweak must be done here, since the UI object used by the
-        # repo is based on, but different from, the one passed in on the
-        # line.
-        r.ui.write_err = r.ui.write
-        return r
+        return hg.repository(self.ui, self._hg_path())
 
     def size(self):
         total_size = 0
@@ -566,9 +904,137 @@
                 pass
         return total_size
 
+    def strip(self, rev, url, parent):
+        """strip the given revision from the repository"""
+        self.repo.ui.setconfig('kiln', 'url', url)
+        self.repo.ui.setconfig('kiln', 'source', parent)
+        self.repo.ui.setconfig('kiln', 'client', 'website')
+
+        node = self.repo[rev].node()
+        lock = None
+        try:
+            lock = self.repo.lock()
+
+            self._removeundo()
+            # Turn off this hook - causes duplicates
+            self.repo.ui.setconfig('hooks', 'changegroup', None)
+            # Buffer this output
+            self.repo.ui.pushbuffer()
+            # Remove without backup, since this should only be called on a clone.
+            repair.strip(self.repo.ui, self.repo, node, None)
+            self.repo.ui.popbuffer()
+            # strip may have unbundled a set of backed up revisions after
+            # the actual strip
+            self._removeundo()
+            self.repo.ui.setconfig('hooks', 'changegroup', 'python:kilnhook.changehook')
+        finally:
+            lock.release()
+        # Trigger the strip hook (changehook) to get a pingback
+        self.repo.hook('strip', node=0)
+
+    def _removeundo(self):
+        """From the mq extension."""
+        undo = self.repo.sjoin('undo')
+        if not os.path.exists(undo):
+            return
+        try:
+            os.unlink(undo)
+        except OSError, inst:
+            pass
+
+    def tag(self, rev, name1, url, ixPerson, username, force, *names):
+        """give the given revision a tag
+
+        name1 is the first tag, and names is an optional list of additional
+        tags
+        """
+        self.repo.ui.setconfig('kiln', 'url', url)
+        self.repo.ui.setconfig('kiln', 'ixperson', ixPerson)
+        self.repo.ui.setconfig('kiln', 'username', username)
+
+        names = (name1,) + names
+        if len(names) != len(set(names)):
+            raise ValueError('Tags must be unique')
+
+        for n in names:
+            if n in ('tip', '.', 'null'):
+                raise ValueError("the name '%s' is reserved" % n)
+            if not force and n in self.repo.tags():
+                raise ValueError("tag '%s' already exists " % n)
+
+        allchars = ''.join(names)
+        for c in self.repo.tag_disallowed:
+            if c in allchars:
+                raise ValueError('%r cannot be used in a tag name' % c)
+
+        node = self.repo[rev].node()
+        if '.hgtags' in self.repo['tip']:
+            data = self.repo['tip']['.hgtags'].data()
+        else:
+            data = ""
+
+        tags = cStringIO.StringIO()
+        tags.write(data)
+
+        if data and data[-1] != '\n':
+            tags.write('\n')
+        for name in names:
+            if force:
+                tags.write('0000000000000000000000000000000000000000 %s\n' % encoding.fromlocal(name))
+            tags.write('%s %s\n' % (hexencode(node), encoding.fromlocal(name)))
+
+        mfctx = context.memfilectx(".hgtags", tags.getvalue(), False, False, None)
+        message = ('Added tag %s for changeset %s' %
+                   (', '.join(names), hexencode(node[:6]).lower()))
+
+        # each memctx expects a function which maps the repository,
+        # the current memctx object and a path to a file into a
+        # filectx object. Since we only ever change .hgtags we use the
+        # "constant" lambda function which always returns mfctx
+        mctx = context.memctx(self.repo, (hexencode(self.repo['tip'].node()), None),
+                              message, (".hgtags",),
+                              lambda x, y, z: mfctx, user=username)
+
+        self.repo.ui.pushbuffer()
+        tagnode = self.repo.commitctx(mctx)
+
+        for name in names:
+            self.repo.hook('tag', node=hexencode(node), tag=name, local=False)
+        self.repo.hook('commit', node=hexencode(tagnode), parent1=hexencode(self.repo['tip'].node()))
+        self.repo.ui.popbuffer()
+
+        return tagnode
+
+    def tags(self):
+        bookmarks = [tag for tag in pushkey.list(self.repo, 'bookmarks').keys()]
+        tags = [{'tag': tag,
+                 'rev': (self.repo[node].rev(), self.repo[node].hex()),
+                 'bookmark': True}
+                for (tag, node)
+                in pushkey.list(self.repo, 'bookmarks').items()
+                if node in self.repo]
+        tags.extend(
+            [{'tag': tag,
+              'rev': (self.repo[node].rev(), self.repo[node].hex()),
+              'bookmark': False}
+             for (tag, node)
+             in self.repo.tags().iteritems()
+             if tag not in bookmarks and node in self.repo])
+        return tags
+
+    def sync(self, site, peers=None, bfile=None):
+        """syncs repository with other backends"""
+        hostname = socket.gethostname()
+        if peers and hostname in peers:
+            for peer in peers[hostname]:
+                data = {'remote': site, 'repo': self.uuid}
+                if bfile:
+                    data['bfile'] = bfile
+                asyncpost(urlutil.urljoin(peer, 'sync'), data)
+
     def _hgrc_get(self):
         try:
-            hgrc = SafeConfigParser()
+            hgrc = RawConfigParser()
             hgrc.read(self._hgrc_path())
             ini = {}
             for section in hgrc.sections():
@@ -578,7 +1044,7 @@
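The new `tag()` method commits to `.hgtags` directly via `memctx`, with no working copy. Each `.hgtags` record is a line of the form `<40-hex-node> <tagname>`, and a forced re-tag first appends a null-node line so the old binding is superseded. A small sketch of just that record format (the helper name is illustrative, not Kiln's):

```python
NULL_NODE = '0' * 40  # the null revision, used to "untag" before re-tagging

def hgtags_lines(node_hex, names, force=False):
    """Build the text tag() would append to .hgtags for the given tag names."""
    lines = []
    for name in names:
        if force:
            # A null-node entry supersedes any previous binding of this name.
            lines.append('%s %s\n' % (NULL_NODE, name))
        lines.append('%s %s\n' % (node_hex, name))
    return ''.join(lines)
```

Because later lines win when Mercurial reads `.hgtags`, appending is enough; nothing ever needs to be rewritten in place.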
         except:
             return {}
     def _hgrc_set(self, ini):
-        hgrc = SafeConfigParser()
+        hgrc = RawConfigParser()
         for section in ini:
             hgrc.add_section(section)
             for key, val in ini[section].iteritems():
@@ -596,14 +1062,17 @@
         self.hgrc = hgrc
     meta = property(_meta_get, _meta_set)
 
+    def meta_deleted(self):
+        return self.meta.get('deleted', 'false').strip().lower() == 'true'
+
     def _add_meta(self, meta):
         m = self.meta
         for key in meta:
             m[key] = meta[key]
         self.meta = m
 
-    def _hg_path(self):
-        return encoding.tolocal(self.path())
+    def _hg_path(self, force_new=False):
+        return encoding.tolocal(self.path(force_new))
 
     def _hgrc_path(self):
         return os.path.join(self.path(), '.hg', 'hgrc')
@@ -615,15 +1084,24 @@
     @propertycache
     def ui(self):
-        u = ui.ui()
+        u = emptyui(suppressoutput=self.suppressoutput)
         u.setconfig('ui', 'quiet', 'True')
+        u.setconfig('extensions', 'hgext.bookmarks', '')
         u.setconfig('web', 'allow_push', '*')
         u.setconfig('web', 'push_ssl', 'False')
-        u.setconfig('hooks', 'changegroup', 'python:kilnhook.hook')
-        u.setconfig('hooks', 'commit', 'python:kilnhook.hook')
+        u.setconfig('web', 'allow_archive', 'zip,gz')
+        u.setconfig('server', 'validate', '1')
+        if not self.suppresshooks:
+            u.setconfig('hooks', 'changegroup', 'python:kilnhook.changehook')
+            u.setconfig('hooks', 'commit', 'python:kilnhook.changehook')
+            u.setconfig('hooks', 'strip', 'python:kilnhook.changehook')
+            u.setconfig('hooks', 'pretxnchangegroup', 'python:kilnhook.prechangehook')
+            u.setconfig('hooks', 'pretxncommit', 'python:kilnhook.prechangehook')
         if settings.HOSTED:
             for host, peer in settings.STORAGE_PEERS.iteritems():
                 u.setconfig('peers', host, peer)
+            for host, peer in settings.POST_STORAGE_PEERS.iteritems():
+                u.setconfig('post_peers', host, peer)
         return u
 
     def __emittable__(self):
Change 1 of 1 Show Entire File kiln/api/static/jquery.color.js Stacked
@@ -0,0 +1,129 @@
+/*
+ * jQuery Color Animations
+ * Copyright 2007 John Resig
+ * Released under the MIT and GPL licenses.
+ */
+
+(function(jQuery){
+
+    // We override the animation for all of these color styles
+    jQuery.each(['backgroundColor', 'borderBottomColor', 'borderLeftColor', 'borderRightColor', 'borderTopColor', 'color', 'outlineColor'], function(i,attr){
+        jQuery.fx.step[attr] = function(fx){
+            if ( !fx.colorInit ) {
+                fx.start = getColor( fx.elem, attr );
+                fx.end = getRGB( fx.end );
+                fx.colorInit = true;
+            }
+
+            fx.elem.style[attr] = "rgb(" + [
+                Math.max(Math.min( parseInt((fx.pos * (fx.end[0] - fx.start[0])) + fx.start[0]), 255), 0),
+                Math.max(Math.min( parseInt((fx.pos * (fx.end[1] - fx.start[1])) + fx.start[1]), 255), 0),
+                Math.max(Math.min( parseInt((fx.pos * (fx.end[2] - fx.start[2])) + fx.start[2]), 255), 0)
+            ].join(",") + ")";
+        }
+    });
+
+    // Color Conversion functions from highlightFade
+    // By Blair Mitchelmore
+    // http://jquery.offput.ca/highlightFade/
+
+    // Parse strings looking for color tuples [255,255,255]
+    function getRGB(color) {
+        var result;
+
+        // Check if we're already dealing with an array of colors
+        if ( color && color.constructor == Array && color.length == 3 )
+            return color;
+
+        // Look for rgb(num,num,num)
+        if (result = /rgb\(\s*([0-9]{1,3})\s*,\s*([0-9]{1,3})\s*,\s*([0-9]{1,3})\s*\)/.exec(color))
+            return [parseInt(result[1]), parseInt(result[2]), parseInt(result[3])];
+
+        // Look for rgb(num%,num%,num%)
+        if (result = /rgb\(\s*([0-9]+(?:\.[0-9]+)?)\%\s*,\s*([0-9]+(?:\.[0-9]+)?)\%\s*,\s*([0-9]+(?:\.[0-9]+)?)\%\s*\)/.exec(color))
+            return [parseFloat(result[1])*2.55, parseFloat(result[2])*2.55, parseFloat(result[3])*2.55];
+
+        // Look for #a0b1c2
+        if (result = /#([a-fA-F0-9]{2})([a-fA-F0-9]{2})([a-fA-F0-9]{2})/.exec(color))
+            return [parseInt(result[1],16), parseInt(result[2],16), parseInt(result[3],16)];
+
+        // Look for #fff
+        if (result = /#([a-fA-F0-9])([a-fA-F0-9])([a-fA-F0-9])/.exec(color))
+            return [parseInt(result[1]+result[1],16), parseInt(result[2]+result[2],16), parseInt(result[3]+result[3],16)];
+
+        // Look for rgba(0, 0, 0, 0) == transparent in Safari 3
+        if (result = /rgba\(0, 0, 0, 0\)/.exec(color))
+            return colors['transparent'];
+
+        // Otherwise, we're most likely dealing with a named color
+        return colors[jQuery.trim(color).toLowerCase()];
+    }
+
+    function getColor(elem, attr) {
+        var color;
+
+        do {
+            color = jQuery.curCSS(elem, attr);
+
+            // Keep going until we find an element that has color, or we hit the body
+            if ( color != '' && color != 'transparent' || jQuery.nodeName(elem, "body") )
+                break;
+
+            attr = "backgroundColor";
+        } while ( elem = elem.parentNode );
+
+        return getRGB(color);
+    };
+
+    // Some named colors to work with
+    // From Interface by Stefan Petre
+    // http://interface.eyecon.ro/
+
+    var colors = {
+        aqua:[0,255,255],
+        azure:[240,255,255],
+        beige:[245,245,220],
+        black:[0,0,0],
+        blue:[0,0,255],
+        brown:[165,42,42],
+        cyan:[0,255,255],
+        darkblue:[0,0,139],
+        darkcyan:[0,139,139],
+        darkgrey:[169,169,169],
+        darkgreen:[0,100,0],
+        darkkhaki:[189,183,107],
+        darkmagenta:[139,0,139],
+        darkolivegreen:[85,107,47],
+        darkorange:[255,140,0],
+        darkorchid:[153,50,204],
+        darkred:[139,0,0],
+        darksalmon:[233,150,122],
+        darkviolet:[148,0,211],
+        fuchsia:[255,0,255],
+        gold:[255,215,0],
+        green:[0,128,0],
+        indigo:[75,0,130],
+        khaki:[240,230,140],
+        lightblue:[173,216,230],
+        lightcyan:[224,255,255],
+        lightgreen:[144,238,144],
+        lightgrey:[211,211,211],
+        lightpink:[255,182,193],
+        lightyellow:[255,255,224],
+        lime:[0,255,0],
+        magenta:[255,0,255],
+        maroon:[128,0,0],
+        navy:[0,0,128],
+        olive:[128,128,0],
+        orange:[255,165,0],
+        pink:[255,192,203],
+        purple:[128,0,128],
+        violet:[128,0,128],
+        red:[255,0,0],
+        silver:[192,192,192],
+        white:[255,255,255],
+        yellow:[255,255,0],
+        transparent: [255,255,255]
+    };
+
+})(jQuery);
This file's diff was not loaded because this changeset is very large. Load changes
Change 1 of 1 Show Entire File kiln/api/templates/base.html Stacked
@@ -0,0 +1,12 @@
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
+<html xmlns="http://www.w3.org/1999/xhtml" >
+  <head>
+    <meta http-equiv="X-UA-Compatible" content="IE=8" />
+    <title>{% block title %}{% endblock title %}</title>
+    <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.4/jquery.min.js"></script>
+    {% block extra_head %}{% endblock extra_head %}
+  </head>
+  <body>
+    {% block content %}{% endblock content %}
+  </body>
+</html>
Show Entire File kiln/backend.py Stacked
Show Entire File kiln/bfiles.py Stacked
Show Entire File kiln/bugzscout.py Stacked
Show Entire File kiln/client.crt Stacked
Show Entire File kiln/client.key Stacked
Show Entire File kiln/encoders.py Stacked
Show Entire File kiln/errorloggingmiddleware.py Stacked
Show Entire File kiln/httpshandler.py Stacked
Show Entire File kiln/imports.py Stacked
Show Entire File kiln/kilnext/__init__.py Stacked
(No changes)
Show Entire File kiln/kilnext/nobinary.py Stacked
Show Entire File kiln/kilnext/nosymlink.py Stacked
Show Entire File kiln/kilnext/subzero.py Stacked
Show Entire File kiln/kilnhook.py Stacked
Show Entire File kiln/miniredis.py Stacked
Change 1 of 1 Show Entire File kiln/ourdot.sh Stacked
@@ -0,0 +1,7 @@
+#!/bin/sh
+
+if [ ! -e ~/miniredis.pid ]; then
+    nohup python miniredis.py -p 56784 -d ~/miniredis.db -l ~/miniredis.out --pid ~/miniredis.pid &
+else
+    echo MiniRedis is running, or crashed
+fi