Python API Branch
A new branch for new/experimental Python API development has been opened.
Grab the branch with the command...
svn co https://svn.blender.org/svnroot/bf-blender/branches/pyapi_devel
The suggested changes need to be discussed on the mailing list.
- Accept contributions from devs that are interested (you don't have to ask) - all contributions are considered tentative until review at the Sunday meeting or on the mailing list (including mine --Ideasman42 13:40, 19 June 2007 (CEST)).
- Keep on track by having peer reviews on list or arrange meetings to make decisions.
- This API is for Blender 2.5 or later; it will not be used in a 2.4x release.
- This API may undergo big changes - that can happen at any time.
- This API will not work with most existing scripts. (porting should not be that hard)
Branch Development Direction
- Development in this branch should follow decisions made for the bpy.* api.
- Remove Blender.* in favor of bpy.* (eventually)
- Hold off on areas that will change with the 2.5 refactor (Blender.Draw and Blender.Window). We will keep Mesh for now since BMesh may not make 2.5.
- Remove getStuff()/setStuff() methods where possible (an easy one, and it removes quite a bit of code).
- Add a unified way of dealing with scriptlinks and materials - mesh.materials.append() should work, and work the same way for all data types.
- Same syntax for all bool types... ob.enableDupGroup / img.isLoaded / ... hasFoo etc. should all use the same syntax. Yet to be decided - I don't really care.
Moved to its own page API Progress
Moved to its own page Meeting Minutes
This section is for discussing new ways the API might deal with Blender's data that make it a pleasure to use.
Keyframe Attributes (Accepted)
Blender's material.alpha at the moment returns a floating point value. But the material's alpha is not just a floating point: it has an IPO, a range, and can be keyed.
We could extend the floating point type (with a subtype) to give users access to these functions whilst all other operations work as expected.
So one might do this...
mat.alpha                  # the floating point that we subtype
mat.alpha.curve            # IPO Curve, or None
mat.alpha.keyframe(frame)  # frame would be an optional arg, otherwise use the current frame
mat.alpha.haskey(frame)    # True if the frame has a key
mat.alpha.delKey(frame)    # remove the key
The advantage of this is that whenever you're accessing a value you can easily animate it, rather than having to look up the documentation and pass some argument like...
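To make the idea concrete, here is a minimal pure-Python sketch of the subtyping approach. Everything here (KeyableFloat, the dict used as a stand-in owner) is invented for illustration; in Blender the owner would be the datablock and keyframe() would insert a real IPO key.

```python
class KeyableFloat(float):
    """A float that remembers its owner, so it can be keyed in place."""
    def __new__(cls, value, owner=None, setting=None):
        self = super().__new__(cls, value)
        self._owner = owner      # stand-in for the datablock this value came from
        self._setting = setting  # which setting to key, e.g. 'alpha'
        return self

    def keyframe(self, frame=None):
        # in Blender this would insert an IPO key; here we just record the frame
        self._owner.setdefault(self._setting, set()).add(frame)

    def haskey(self, frame=None):
        return frame in self._owner.get(self._setting, set())

keys = {}
alpha = KeyableFloat(0.5, owner=keys, setting='alpha')

print(alpha * 2.0)       # 1.0 - behaves as an ordinary float in arithmetic
alpha.keyframe(10)
print(alpha.haskey(10))  # True
```

Note that arithmetic on the subtype returns plain floats, which is exactly the "all other operations work as expected" behaviour described above.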
Extending to other Types
If keying floating points is accepted, it makes sense to extend this to other types, for example.
ob.loc.keyframe(), material.color.keyframe(), bone.quat.keyframe().
This is not hard to add because wrapped colors, vectors, eulers and quats already store their data type and a reference to the data they are derived from.
Then .insertIpoKey(CONST) can be removed.
- You can add a keyframe to a single value (in some cases this isn't possible at the moment without accessing the curve data directly)
- It's discoverable. A new scripter could just try ob.LocX.keyframe(): if it works that's good, otherwise that type can't be keyed - rather than having to look up constants for keyable types.
- It allows better control. For example, at the moment ob.insertIpoKey() accepts constants that group multiple settings together for keying, so you can't key just one setting.
- Allows a straightforward way to add both keyframe and curve data for all keyable values
- Users may want to simply add a bunch of keyframes, like moving an object and inserting keys in Blender's 3D view.
- Access to curves is for more advanced operations - like editing splines in the IPO window.
- the subtyped floats would not sync with the original data (once they are assigned, their value won't update with Blender's settings)
- the PyObject subtype would have an extra pointer and a short - the pointer to reference the datablock and the short to store which setting this PyObject represents, so it knows what to key
- These subtypes would only be used for keyable types.
- Note: keyable variables are only settings for that data, so it is unlikely subtyping floats would make a noticeable difference to the speed of running a script - as compared with subtyping the floats from vectors, for instance.
Inheriting from the Python float type here is abusing the whole concept of inheritance. Material alpha is a float - at least up until the time it has keyframes; then it becomes an IPO, which is a function. At that point its value depends on the frame time and is no longer a constant.
Inheritance only makes sense when an 'isA' relation holds; when a derived class is a base class in all circumstances. Example: a Dog class can be derived from a Mammal class because dogs are always mammals and have mammal properties.
Creating a class for attributes like material alpha is a good idea, but composition (aggregation) is a better tool than inheritance.
I don't understand the part about "would not sync with the original data (once they are assigned- their value wont update with blenders settings)". If you change a value via bpy, you would expect that value to be reflected in Blender. Would there be some other mechanism to change the Blender data? --Stivs 14:32, 18 August 2007 (CEST)
Data as a string - repr()/str()
Python provides two ways to represent data as a string. At the moment Blender only uses repr(), which looks a little something like this.
>>> print ob
[Object "MyObject"]
The str() function is meant to return representations of values which are fairly human-readable, while repr() is meant to generate representations which can be read by the interpreter (or will force a SyntaxError if there is no equivalent syntax).
We could make printing an object return a string that can be evaluated. For example, repr() could return...
At the moment, printing some variable in the middle of a script (whilst looking for a bug, for instance) isn't always that useful. Instead of [Timeline "Scene"], repr() could return bpy.data.scenes["Scene", None].timeline, and a vector could return bpy.types.Vector(0.0000, 0.0000, 0.0000).
This could give a person looking into a script a better understanding of where the data is coming from as well as being evaluated as valid python which has its minor advantages.
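A toy stand-in class (not Blender's Vector) shows the distinction: repr() returns a string the interpreter can evaluate back into an equal object, while str() stays short and human-readable.

```python
class Vector:
    """Illustrative stand-in for a wrapped vector type."""
    def __init__(self, x, y, z):
        self.x, self.y, self.z = x, y, z

    def __repr__(self):
        # evaluable: eval(repr(v)) reconstructs an equal vector
        return "Vector(%.4f, %.4f, %.4f)" % (self.x, self.y, self.z)

    def __str__(self):
        # short, human-readable form
        return "[Vector %g, %g, %g]" % (self.x, self.y, self.z)

    def __eq__(self, other):
        return (self.x, self.y, self.z) == (other.x, other.y, other.z)

v = Vector(0.0, 1.0, 2.0)
print(repr(v))             # Vector(0.0000, 1.0000, 2.0000)
print(str(v))              # [Vector 0, 1, 2]
print(eval(repr(v)) == v)  # True - round-trips through the interpreter
```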
This section is for discussing changes to the way Blender/Python's data is accessed and interconnects, as well as naming conventions.
This is an area that makes an impact on how easy the API is to learn, and bad choices affect us later on, since changing this area of the API will break scripts.
Currently the Curves have very confusing naming.
- Curves in the Curve-Object-Data object are called "Nurbs" even if they are bezier or polylines.
Nurb also refers to the curve interpolation type.
- The points in a curve are called "BezTriple" even if the curve is a nurb or a polyline.
I propose we change the terminology, from ...
Then the terms Bezier and Nurb can be used for the type of curve without any confusion.
The names *are* confusing, although they do reflect the underlying C structs.
In geometry, Line has a meaning distinct from Curve. Maybe Spline is a better choice. NURBS means non-uniform rational B-spline, after all.
A Blender Curve can consist of multiple elements. Each element can be either a NURB curve or a Bezier curve. One type can be converted to the other. Both NURBs and Beziers have their own type of control point with a different number of coordinates.
- NurbPoint or Bezpoint depending on type of Spline
I propose we make scriptlinks a subtyped list kept up to date with the real scriptlinks, and have each scriptlink be a tuple (Text, Event). I tried making them new PyTypes but applying their changes back to the original data didn't work very well.
Tuples are not as nice to work with because modifying a scriptlink will mean you have to reassign a tuple; however, scriptlinks are not used all that often anyway, and tuples are at least easy to understand. --Ideasman42 00:04, 24 June 2007 (CEST)
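A small sketch of how the tuple representation would feel in practice (the text names and events are invented): reading is easy, but changing one field means rebuilding the whole tuple.

```python
# each scriptlink as an immutable (text, event) pair
scriptlinks = [("myscript.py", "FrameChanged"), ("setup.py", "OnLoad")]

# read access is simple - tuples unpack cleanly
text, event = scriptlinks[0]

# but changing just the event requires constructing a new tuple
scriptlinks[0] = (scriptlinks[0][0], "Redraw")

print(scriptlinks[0])  # ('myscript.py', 'Redraw')
```

This is the reassignment awkwardness mentioned above; the trade-off is that tuples are trivially understood and need no custom PyType.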
Deform Groups (more pythonic representation)
This could be done with a subtyped list, which would allow the following operations.
mesh.vgroups.append('NewGroup')   # add a new group
mesh.vgroups.remove('Group')      # remove a group
mesh.vgroups[i] = 'SomeGroup'     # rename an existing group
mesh.vgroups[:] = []              # remove all vertex groups
At the moment, managing vertex groups is not well integrated with the rest of the mesh API.
I'm proposing that verts (for Mesh and Lattice) have dictionary-style access to a vert's weights and vertex groups.
Dealing with verts...
vert['Group'] = 1.0    # assign this vert's weight in 'Group' to 1.0
del vert['Group']      # remove the vert from 'Group'
for vgroup in vert.vgroups(): print vgroup    # like dict.keys()
for weight in vert.vweights(): print weight   # like dict.values()
It may be better to add an attribute to the vertex - vert.vgroup['Group']
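The dict-style access could be simulated in plain Python like this; the Vert class and its backing dict are invented here purely to illustrate the proposed vgroups()/vweights() behaviour.

```python
class Vert:
    """Stand-in vertex with dict-style group/weight access."""
    def __init__(self):
        self._weights = {}   # group name -> weight (invented backing store)

    def __setitem__(self, group, weight):
        self._weights[group] = weight

    def __delitem__(self, group):
        del self._weights[group]

    def vgroups(self):
        return list(self._weights.keys())    # like dict.keys()

    def vweights(self):
        return list(self._weights.values())  # like dict.values()

vert = Vert()
vert['Group'] = 1.0       # assign this vert's weight in 'Group' to 1.0
print(vert.vgroups())     # ['Group']
del vert['Group']         # remove the vert from the group
print(vert.vgroups())     # []
```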
API Internal Workings
This section is to discuss how the API internals might work better, resulting in an API that is easier to maintain and runs efficiently.
At the moment 'undo' will close all running user interface scripts, and also invalidate data that was created before undoing.
This leads to possible crashes that aren't possible any other way.
This is totally unacceptable and we need to work out how python can play nice with undo.
A drastic solution is to stop all running scripts before undo runs. This would solve bad pointer problems. Then we could look at ways of getting scripts to re-run after undo so it does not cause such problems.
Another solution would be to make the Blender/Python API aware that an undo has happened and make it deal with the changes using callbacks.
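A rough sketch of the callback approach - all names here (register_undo_handler, run_undo) are invented, and nothing like this exists in the current API. The idea is that undo first invalidates wrapped data, then notifies each registered script so it can re-fetch what it needs.

```python
_undo_handlers = []

def register_undo_handler(fn):
    """A script registers a callback to be told when undo happens."""
    _undo_handlers.append(fn)

def run_undo(wrapped_objects):
    """Stand-in for Blender's undo: invalidate wrappers, then notify."""
    for obj in wrapped_objects:
        obj['valid'] = False        # stale wrappers must not be dereferenced
    for fn in _undo_handlers:
        fn()                        # let running scripts recover

events = []
register_undo_handler(lambda: events.append('undo happened'))

mesh = {'valid': True}              # stand-in for a wrapped datablock
run_undo([mesh])
print(mesh['valid'], events)
```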
Blender uses arrays and linked lists internally for object data (which can have tens of thousands of elements). For these cases it's often best to thinly wrap the list, to avoid building/maintaining large lists in Python.
This works well but has the overhead of each DataType needing a hand-written PythonType as well as documentation, which makes adding new types prohibitively time consuming. What has ended up happening in the Blender 2.4x API is:
- The PyTypes work like Python's lists but are incomplete implementations (users find some operations missing - ls1+ls2 or ls[:] don't work) - see object.modifiers
- A real Python list is accepted and returned, but that does not link to the original data.
mesh.material = mat and mesh.material.append(mat) do not work with the 2.44 API.
For small lists, where speed is not so important, there are some solutions to this...
(I have added this to the python-dev branch but we have not made a decision yet --Ideasman42 22:47, 24 July 2007 (CEST))
This is where Blender returns a real list that is subtyped to contain a reference to its origin.
Before getting any data, this list will check that its contents match Blender's internal data. After modifying the list, its changes are copied back to Blender.
The current implementation can deal with multiple list types - materials and
- You don't have to write an entire PyType to add a new type; each type only needs...
- bpyls_update_to_list - updates the python list
- bpyls_update_from_list - updates blenders internal data from the list
- bpyls_maxlen - returns the max length of the list.
- bpyls_is_compat - checks that a PyObject is compatible with this type; you can't add incorrect data to the list.
- Uses real lists, this means all operations act as expected without needing to look up documentation or be confronted with unexpected results.
- slow, since it synchronizes the list on every operation.
- less control over each list type
Note - the current implementation uses one .c file; it could be made into an API where each type provides its own syncing functions.
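In outline, the syncing behaviour could look like this. The plain list stands in for Blender's internal C data, and the two private methods mirror the bpyls_update_to_list / bpyls_update_from_list hooks listed above; a full version would wrap every mutating operation the same way.

```python
blender_data = ["MatA", "MatB"]   # stand-in for Blender's internal array

class SyncedList(list):
    """A list subclass kept in sync with a backing store."""
    def _update_to_list(self):
        # bpyls_update_to_list: refresh our contents from Blender
        self[:] = blender_data

    def _update_from_list(self):
        # bpyls_update_from_list: push our contents back to Blender
        blender_data[:] = self

    def append(self, item):
        self._update_to_list()      # sync before the operation
        list.append(self, item)
        self._update_from_list()    # copy changes back afterwards

materials = SyncedList(blender_data)
blender_data.append("MatC")     # Blender changes behind our back
materials.append("MatD")        # sync pulls in MatC before appending

print(blender_data)             # ['MatA', 'MatB', 'MatC', 'MatD']
```

This also makes the stated disadvantage visible: every operation pays for a full resync with the backing data.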
I don't understand the advantage of maintaining a copy of Blender data. Every time you access the list, you need to check the Blender data. Every time you modify the list, you have to update the Blender data. A more straightforward solution seems to be to make our list type simply use the actual Blender data rather than making a copy. --Stivs 13:55, 18 August 2007 (CEST)
We could use our own PyType - but document that it has only a subset of the operations a normal list has.
This could be written as one PyType with slots for operations (its own API), or as separate PyTypes that all have the same features - from the script writer's perspective they are similar.
- runs fast, since it wraps Blender's data directly.
- no syncing between lists
- script writers only need to learn how to use one PyType
- easy to write, since we don't need to implement every list operation.
- not as useful as a list
- script writers will need to remember the limited set of operations this type has.
We just write a custom type to interface each Blender data type.
- greatest flexibility.
- writing a new pytype with enough list/sequence functionality is not trivial.
- script writers need to be aware of operations each type is capable of - more looking up docs.
- when finished there is still a lot of code to maintain.
API Memory Management
The current 2.44 API has many areas where removing data while you're using it will crash Blender, or pointers may become invalid when data is reallocated.
This section is to discuss how these problems can be resolved.
Track all ID's?
Will all Python data with IDs need to be tracked and stored in a hash (or just the ones that can be removed)?
The advantage of using a hash of all ID types is that every ID will only ever have one PyObject. This means if we need to modify it, we know changes to the PyObject will be reflected in other running scripts.
A problem with this is that you can't, for instance, have 2 Python meshes that each link to their own object (for vertex group operations). Possible solutions are:
- Making the object an attribute that can be set.
This could still be problematic if scripters don't think to set it and if other areas of their code change it.
- Adding a mesh subtype that deals with all vertex group operations, each subtype would point to its own object.
This would match current functionality best, and not be that hard to code.
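The one-PyObject-per-ID idea can be sketched with a simple hash (a dict); the key type and the Wrapper class are stand-ins for whatever the C implementation would actually use.

```python
_id_cache = {}   # ID key -> the single wrapper for that ID

class Wrapper:
    """Stand-in for a PyObject wrapping a Blender ID datablock."""
    def __init__(self, id_key):
        self.id_key = id_key
        self.valid = True    # flipped off if Blender frees the ID

def get_wrapper(id_key):
    """Return the one wrapper for this ID, creating it on first access."""
    w = _id_cache.get(id_key)
    if w is None:
        w = Wrapper(id_key)
        _id_cache[id_key] = w
    return w

a = get_wrapper("Mesh.Cube")
b = get_wrapper("Mesh.Cube")
print(a is b)   # True - one wrapper per ID, so edits are seen everywhere
```

When Blender removes a datablock, the API would look the ID up in the hash and mark its wrapper invalid, which is the invalidation approach mentioned for Scene and Text.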
Python and Object Usercounts
BPyObjects (as in a Blender Object) currently use id.us - it is BAD to have Python mess with Blender's usercounts.
Can we just remove it? Even with the existing API, it worked alright when they did not add to id.us.
If an object is deleted we can just invalidate it using hashes (same as Text and Scene).
Another option is to flag IDs with a PYUSER flag. Since there will only ever be one PyObject referencing an object ID, we can just flag this on or off. That way, if Blender removes an object Python is currently using, it can check the flag and not free the lib_block.
I think we should just have this work like Scene and Text --Ideasman42 18:01, 22 June 2007 (CEST)
EPY C/API Docstrings
NOTE, Willian is researching an alternate method of maintaining docs
Just committed an experimental C docstring/epydoc method of maintaining our Python API docs. It's worth discussing the options for docs. --Ideasman42 10:04, 3 July 2007 (CEST)
This is not ideal (epydocs in C are a bit of a kludge) but it has some good points and IMHO is better than what we have been doing.
- 1 set of docs, not 2
- built-in Python methods are added, like __getitem__ or __hash__
- we can do preprocessing since the epydocs are made from a script...
- better C docstrings (albeit with a few odd epydoc tags) - people do use the C docstrings sometimes, and at the moment they are not well checked - some have mistakes or are quite uninformative.
- extracts examples to py files so we can automate running them all.
- it's more flexible; we could for instance have all examples in py files and include them when Blender puts the docs together. This would also avoid bloating Blender too much with really big docstrings.
- converting existing epydocs into C/epy/docstrings is not hard.
- editing large docstrings with \n\ at the end of every line is not nice.
- non-standard, uses a custom script (~200 lines) to extract and write
- the workflow for writing docs is longer, since you need to edit the C file, compile Blender and run the script. If there is a missing tab you'd not want to do all that again. Realistically it's not so bad, since you can edit the docs that Blender spits out and then update the C file once you're done, but the process is still longer for writing epydocs - though arguably better than editing docs in 2 places.
- one more hoop to jump through when writing docs - you have to learn our way of adding epydocs in C.
I'd rather not make it harder for new people to come in and write docs. So far Python scripters have not come up and offered to write docs; instead it's been the C/API authors, who are already familiar with the C files. It also seems that very few people besides the API devs even use epydoc or have it installed, so I'm not too worried about this.
Have a look at Group.c to see what the epy/docstrings look like.
If we decide this is acceptable I can continue and move other finished areas of the new API to epy/docstrings. --Ideasman42 18:40, 9 July 2007 (CEST)
Two things bother me about this. First, the process for creating and editing seems overly complicated and painful.
The second and more important is that the doc strings and the on-line docs serve two completely different roles and require different content.
The docstrings provide quick usage hints like the arguments for functions and the names of attributes. A unixy example is running "foo --help" to get the arguments for the command "foo".
On-line docs are a more complete reference to the API. Here, in addition to the names of args and attributes, you would expect more information like detailed descriptions, example code, and discussion of usage and possible pitfalls. At the least, this set of docs corresponds to unix man pages. In the best of all worlds, the on-line docs would be split into a Reference Manual and a User Guide - an organization style familiar to most anyone who has used a commercial library package.
Simply put, the docstrings are a stripped down version of the on-line doc. If you are going to try to maintain a single set of doc files, this suggests that the docstrings are generated from the on-line docs rather than the other way around. Stivs 16:57, 15 July 2007 (CEST)