- 12 Mar, 2015 1 commit
-
-
Romain Courteaud authored
The configuration is done on the Web Site level. No custom JS code is needed.
-
- 11 Mar, 2015 10 commits
-
-
Kirill Smelkov authored
This reverts commit 193f5cdd, reversing changes made to 4ee61a23.

Jean-Paul suggested that we first further review our code review / merging procedures and use e.g. this particular merge request as a basis for that. Thus I'm reverting this merge, so people can study it and re-merge it "properly".

~~~~

Please note: I could potentially rewrite master history so that there would be no merge commit at all, since e.g. my branch had never been merged before. In contrast to draft branches, however, it is not good practice to rebase, and thus rewrite, master history - what has been committed is committed and we only continue. So later, to re-merge my branch, if it is not changed, we'll need to first revert this revert (see [1] for the rationale), or alternatively I could re-prepare my patches with different sha1 ids and they would merge again the "usual way".

[1] http://git-scm.com/docs/howto/revert-a-faulty-merge.html
-
Romain Courteaud authored
-
Romain Courteaud authored
-
Romain Courteaud authored
-
Romain Courteaud authored
The configuration is done on the Web Site level, to reduce the need to write custom JS code.
-
Romain Courteaud authored
-
Romain Courteaud authored
-
Romain Courteaud authored
Ensure that the data format returned is stable. By default, limit the number of documents returned by the search. Do not return document properties outside of form rendering. Minor bug fixes.
-
Kazuhiko Shiozaki authored
-
Kirill Smelkov authored
For my current project I needed to rework the BigFile interface to support on-server data appending. While doing so I discovered that in many places the current BigFile code could crash and was not working properly. Thus this patch series contains: 1) fixes for the discovered bugs; 2) support for on-server data appending via exposing the ._appendData(data_chunk) method; 3) basic tests for BigFile (so that we know the fixes are proper and we won't regress on the same things in the future).

The newly introduced BigFile tests were verified to pass on the testrunner:

https://nexedi.erp5.net/test_result_module/20150303-3789A033
https://nexedi.erp5.net/test_result_module/20150303-3789A033/7

NOTE: running the whole testsuite failed because of an erp5_test_result:testTaskDistribution failure ( https://nexedi.erp5.net/test_result_module/20150303-3789A033/23 ). testTaskDistribution currently fails from time to time even on ERP5-MASTER tests, e.g. here https://nexedi.erp5.net/test_result_module/20150226-1F4459D0, so from my point of view it should not be related to BigFile at all.

Origin: http://lab.nexedi.cn/kirr/erp5 y/bigfile-fixes+append-v2
Reviewed-on: https://lab.nexedi.cn/nexedi/erp5/merge_requests/3 (= merge request !3)

~~~~~~~~

* 'y/bigfile-fixes+append-v2' of http://lab.nexedi.cn/kirr/erp5:
  BigFile: Basic tests
  bt5/erp5_big_file: Regenerate
  BigFile: Fix most errors reported by pyflakes
  BigFile: Factor out code to append data chunk to ._appendData()
  BigFile: .data can be BTreeData or None or (possibly non-empty) str
  BigFile: Factor out "get .data mtime" into helper
  BigFile: No need to explicitly convert btree=None -> BTreeData() in PUT
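A minimal usage sketch of the server-side append described above, e.g. from a Python Script (the module and document names are illustrative; only ._appendData() and .getSize() come from this patch series):

    # Illustrative server-side use (the module/document names are made up):
    big_file = context.document_module.my_big_file   # an existing BigFile document
    chunk = 'some more bytes to store'
    big_file._appendData(chunk)                      # append without an HTTP PUT
    return '%d bytes total' % big_file.getSize()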
-
- 10 Mar, 2015 4 commits
-
-
Gabriel Monnerat authored
-
Sebastien Robin authored
-
Sebastien Robin authored
-
Sebastien Robin authored
Before, the code raised an error when no date was found for doing stock optimisation. But when no date is found, it only means that the stock does not go below the expected value, so we do not need to do any optimisation.
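A minimal sketch of the intended control flow, with entirely made-up names (not the actual stock optimisation code):

    # Illustrative only: a missing date means the stock never drops below the
    # expected value, so there is nothing to optimise and no error to raise.
    def getFirstShortageDate(stock_by_date, expected_value):
        for date in sorted(stock_by_date):
            if stock_by_date[date] < expected_value:
                return date
        return None

    def optimiseStock(stock_by_date, expected_value):
        date = getFirstShortageDate(stock_by_date, expected_value)
        if date is None:
            return []   # nothing to optimise, this is not an error
        return [('resupply_before', date)]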
-
- 09 Mar, 2015 1 commit
-
-
Kazuhiko Shiozaki authored
so that we can customise the color easily.
-
- 06 Mar, 2015 3 commits
-
-
Kazuhiko Shiozaki authored
because we have no such generic property and this field can cause TypeError when saving.
-
Sebastien Robin authored
-
Sebastien Robin authored
The next step will be to change it into an ERP5 renderJS gadget.
-
- 05 Mar, 2015 3 commits
-
-
Jérome Perrin authored
-
Xiaowu Zhang authored
-
Sebastien Robin authored
-
- 04 Mar, 2015 4 commits
-
-
Sebastien Robin authored
Sort Index could be useful in various cases in simulation, at least for MRP and transformations.
-
Yusei Tahara authored
Add business application category and set module group to position modules. Those are common settings for all modules.
-
Tatuya Kamada authored
-
Tatuya Kamada authored
To enable this function, you need to override ERP5Site_getJavaScriptRelativeUrlList. This means it does nothing by default, even if you install this business template.
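A minimal sketch of such an override as a Python Script in a custom skin folder (the returned paths are assumptions, not part of this business template):

    # Python Script: ERP5Site_getJavaScriptRelativeUrlList (custom override)
    # Return the JavaScript files to include, as URLs relative to the site.
    # The paths below are purely illustrative.
    return ['custom_js/jquery.js',
            'custom_js/my_site_widgets.js']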
-
- 03 Mar, 2015 7 commits
-
-
Kirill Smelkov authored
So far BigFile was not unit-tested, and because of the recent BigFile patches and fixes Romain suggested writing tests for it.

We test:

- working with BigFile via its public interface: GET/PUT, both plain and with range variants, .getData()/.getSize(), and the recently-introduced ._appendData();
- that BigFile correctly handles situations where .data is either None or str or BTreeData, and that migration to BTreeData automatically happens on append.

~~~~

Unlike the common case, BigFile works on REQUEST and RESPONSE rather directly (instead of plain object publishing), so to test it we not only need to call methods and compare return values, but first prepare a proper request/response pair, set them up, and analyze response headers and content after the method invocation.

For preparing request/response, Zope provides the utility Testing.makerequest() and its 2 variations, but for our case they all turned out to be not flexible enough - e.g. Testing.makerequest() hardcodes request stdin=sys.stdin ( https://github.com/zopefoundation/Zope/blob/master/src/Testing/makerequest.py#L56 ) and we need to provide it, e.g. to upload via PUT; makerequest from Products.CMFCore.tests.test_CookieCrumbler hardcodes the request environment ( http://svn.zope.org/Products.CMFCore/branches/2.2/Products/CMFCore/tests/test_CookieCrumbler.py?revision=126491&view=markup ) and we need it as a convenient way to set request headers; etc.

So first we introduce our own makerequest-alike that:

1. always redirects stdout to a StringIO,
2. lets the stdin content be specified and processed,
3. returns the actual request object (not a wrapped portal).

On top of that we introduce two convenience helpers, GET and PUT, to prepare the same-named requests, and then a function to generally invoke a request on an object and check the results - i.e. given an object and a request, find the appropriate method, call it appropriately, verify the return value, the HTTP status code and the response body, and check the asserted headers. All that in one line - to keep the signal-to-noise ratio high.

~~~~

There are still some things to fix and improve:

- Zope translates the 308 HTTP status code (which BigFile PUT with a range query returns) to 500 because that code is experimental:
  https://github.com/zopefoundation/Zope/blob/master/src/ZPublisher/HTTPResponse.py#L226
  https://github.com/zopefoundation/Zope/blob/master/src/ZPublisher/HTTPResponse.py#L64

- It is not clear (to me) what a PUT range query should return for an empty file. In HTTP/1.1 ranges are specified with both start and end inclusive, so currently for the empty-file case the BigFile code returns "0--1" (= "0" - "-1"), but that is not valid according to the HTTP/1.1 spec ( http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.35.1 ), and again, judging from the spec, it is not clear how to represent an "empty" range. For now I've left "0--1" checked as correct, but left a note in the tests that this is dubiously so.

- Support for 'If-Range' and multiple ranges in 'Range' headers is not tested.

- There are no scalability tests, i.e. "let's write a lot of data into BigFile and see how the underlying BTreeData behaves".

So for now these are only basic tests, so that we know the general BigFile logic and interface work.

The test is done as a "live test" under the erp5_big_file bt5, as per Sebastien's suggestion.

Helped-by: Sebastien Robin <seb@nexedi.com>
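A rough sketch of such a makerequest-alike helper, assuming Zope 2's HTTPRequest/HTTPResponse constructors (simplified; the real helpers live in the erp5_big_file live tests):

    from StringIO import StringIO
    from ZPublisher.HTTPRequest import HTTPRequest
    from ZPublisher.HTTPResponse import HTTPResponse

    def makerequest(root, stdin=''):
        # Unlike Testing.makerequest(): stdout always goes to a StringIO,
        # stdin content is controlled by the caller, and the bare request
        # object (not a wrapped portal) is returned.
        response = HTTPResponse(stdout=StringIO())
        env = {'SERVER_NAME': 'localhost', 'SERVER_PORT': '80',
               'REQUEST_METHOD': 'GET'}
        request = HTTPRequest(StringIO(stdin), env, response)
        request['PARENTS'] = [root]
        return request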
-
Jérome Perrin authored
-
Kirill Smelkov authored
Rebuild this bt5 afresh from current ERP5.
-
Kirill Smelkov authored
$ pyflakes product/ERP5/Document/BigFile.py
product/ERP5/Document/BigFile.py:27: 'getToolByName' imported but unused
product/ERP5/Document/BigFile.py:180: undefined name 'DateTime'
product/ERP5/Document/BigFile.py:325: local variable 'filename' is assigned to but never used
product/ERP5/Document/BigFile.py:360: local variable 'data' is assigned to but never used

getToolByName is not used. For DateTime we add the appropriate import. data was unused from the beginning - from 00f696ee (Allow to upload in chunk.) - for query_range we just return range = [0, current_size-1] and data is left unused.

I did not remove filename in

    # need to return it as attachment
    filename = self.getStandardFilename(format=format)
    RESPONSE.setHeader('Cache-Control', 'Private') # workaround for Internet Explorer's bug
    RESPONSE.setHeader('Accept-Ranges', 'bytes')

because, as the comment says, it tries to work around some IE bug, and I have no clue whether filename is needed in that case and was simply forgotten to be appended, or whether it is the other way around.

Reviewed-by: Romain Courteaud <romain@nexedi.com>
-
Kirill Smelkov authored
So that data can be appended by server-side code via direct calls too.

NOTE: previously ._read_data() accepted a data=None argument, and callers either provided it with the current .data to append to, or None to forget the old content and just create fresh content. We could drop data=None from the _read_data() signature, but we leave it as is for compatibility with outside code (e.g. Zope's OFS.Image.File.manage_upload() calls ._read_data(file) without any data argument, and in that case the file content should be recreated, not appended).

On the other hand, we rework our code in .PUT() so that for both "new content" and "append range" it always ends up doing an "append" operation. For this to work, "new content" simply truncates the file before proceeding to "append".

Reviewed-by: Romain Courteaud <romain@nexedi.com>
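A simplified sketch of the reworked .PUT() flow described above (not the literal patch; everything except ._appendData() is a stand-in):

    # Both "new content" and "append range" end on the same append path;
    # a plain PUT without Content-Range just forgets the old data first.
    def PUT(self, REQUEST, RESPONSE):
        content_range = REQUEST.get_header('Content-Range', None)
        data_chunk = REQUEST.get('BODY')
        if content_range is None:
            self.setData(None)        # stand-in for truncating the file
        self._appendData(data_chunk)  # then always append
        RESPONSE.setStatus(204)
        return RESPONSE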
-
Kirill Smelkov authored
The current BigFile code in many places assumes the .data property is either None or a BTreeData instance. But as already shown in 4d8f0c33 (BigFile: Fix non-range download of just created big file), this is not true and .data can be an str. This leads to situations where the code wants to act on an str as if it were a BTreeData instance, e.g.

    def _range_request_handler():
        ...
        if data is not None:
            RESPONSE.setHeader('Last-Modified', rfc1123_date(data._p_mtime))

or

    def _read_data(... data=None):
        ...
        if data is None:
            btree = BTreeData()
        else:
            btree = data
        ...
        btree.write(...)

and other places, and in all those situations we'll get an AttributeError (str has neither ._p_mtime nor .write) and it will crash.

~~~~

.data can be an str at least because '' is the default value for the `data` field in the Data property sheet. From this proposition the code could be reorganised to work with ".data is either BTreeData or empty (= None or '')".

But we discussed with Romain, and his idea is that non-empty strings have to be supported too, for compatibility reasons and because of the desire to support a possible future automatic migration of File-based documents to BigFiles. From this perspective the BigFile invariant is thus: .data is either BTreeData or str (empty or not) or None.

This patch goes through the whole BigFile code and corrects places to either properly support the str case, or None (e.g. in computing len(data) in index_html). In _read_data(), if data was previously an str, that means we are appending content to this file, and thus it is a good idea to first convert the str (empty or not) to BTreeData and then proceed with appending.

Helped-by: Vincent Pelletier <vincent@nexedi.com>
Reviewed-by: Romain Courteaud <romain@nexedi.com>
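A condensed sketch of the invariant handling described above (illustrative only; the import path and helper name are assumptions):

    from Products.ERP5Type.BTreeData import BTreeData   # assumed import path

    def _to_btree(data):
        # .data may be None, a (possibly empty) str, or already a BTreeData.
        # Before appending, convert anything else into a BTreeData, keeping
        # any existing str content.
        if isinstance(data, BTreeData):
            return data
        btree = BTreeData()
        if data:
            btree.write(data, 0)
        return btree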
-
Kirill Smelkov authored
Since .data can be BTreeData or None (or, as we'll see next, an str), ._p_mtime is not always defined on it, and in several places the current code has branches for where to get the mtime from. Move this logic out into a separate helper, so the code which needs to know the mtime gets streamlined. Suggested-by: Vincent Pelletier <vincent@nexedi.com>
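A minimal sketch of such a helper (the name and the fallback are assumptions, not necessarily what the patch does):

    def _data_mtime(self):
        # BTreeData is a persistent object carrying its own ._p_mtime;
        # for str or None, fall back to the document's own modification time.
        mtime = getattr(self.data, '_p_mtime', None)
        return mtime if mtime is not None else self._p_mtime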
-
- 02 Mar, 2015 5 commits
-
-
Romain Courteaud authored
-
Romain Courteaud authored
-
Romain Courteaud authored
Move all ERP5 custom code into gadget_erp5.js. The jIO gadget can now be used by any renderJS application.
-
Romain Courteaud authored
-
Kirill Smelkov authored
Because we next pass that btree to ._read_data(), and ._read_data() intentionally creates an empty BTreeData() when btree is initially None. Reviewed-by: Romain Courteaud <romain@nexedi.com>
-
- 26 Feb, 2015 2 commits
-
-
Jérome Perrin authored
-
Kazuhiko Shiozaki authored
Do not take node_category into account when getting aggregated payment transaction lines from a payment transaction group.
-