From b12de3f01b59af8663797b5a5955fb06976c486b Mon Sep 17 00:00:00 2001 From: "Ralf W. Grosse-Kunstleve" Date: Mon, 10 Nov 2003 20:39:13 +0000 Subject: [PATCH] additions by Niall Douglas with heavy edits by Ralf [SVN r20780] --- doc/v2/faq.html | 314 +++++++++++++++++++++++++++++++++++++++++------- 1 file changed, 270 insertions(+), 44 deletions(-) diff --git a/doc/v2/faq.html b/doc/v2/faq.html index bdf99519..8664573f 100644 --- a/doc/v2/faq.html +++ b/doc/v2/faq.html @@ -58,10 +58,25 @@
How can I wrap a function which needs to take ownership of a raw pointer?
Compilation takes too much time and eats too much memory! What can I do to make it faster?

How do I create sub-packages using Boost.Python?
error C2064: term does not evaluate to a function taking 2 arguments +
+ +
How do I handle void * conversion?
+ +
How can I automatically convert my custom string type to and from a Python string?
+ +
Why is my automatic to-python conversion not being found?
+ +
Is Boost.Python thread-aware/compatible with multiple interpreters?

@@ -87,7 +102,7 @@ And then:
 >>> def hello(s):
-...    print s 
+...    print s
 ...
 >>> foo(hello)
 hello, world!
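For reference, a minimal sketch of a C++ foo that would produce a session like the one above, assuming foo only needs to invoke the Python callable it is given (the function and module names here are illustrative, not from the FAQ):

#include <boost/python.hpp>
using namespace boost::python;

// Accept any Python callable and call it back with a C++ string;
// the const char* argument is converted to a Python str automatically.
void foo(object callable)
{
    callable("hello, world!");
}

BOOST_PYTHON_MODULE(callback_demo)
{
    def("foo", foo);
}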
@@ -119,7 +134,7 @@ hello, world!
     

I'm getting the "attempt to return dangling reference" error. What am I doing wrong?

That exception is protecting you from causing a nasty crash. It usually happens in response to some code like this:
 period const& get_floating_frequency() const
 {
@@ -127,7 +142,7 @@ period const& get_floating_frequency() const
       m_self,"get_floating_frequency");
 }
 
And you get:
 ReferenceError: Attempt to return dangling reference to object of type:
 class period
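One common way out, sketched here under the assumption that your interface can return the period by value: let call_method produce a copy that owns its own storage.

period get_floating_frequency() const
{
    // returns a new period by value instead of a reference into a
    // temporary Python object
    return boost::python::call_method<period>(
        m_self, "get_floating_frequency");
}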
@@ -158,7 +173,7 @@ class period
       I have the choice of using copy_const_reference or
       return_internal_reference. Are there considerations that would lead me
       to prefer one over the other, such as size of generated code or memory
-      overhead? 
+      overhead?
 
       

A: copy_const_reference will make an instance with storage for one of your objects, size = base_size + 12 * sizeof(double). @@ -180,7 +195,7 @@ class period
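A short illustration of where each policy is specified in a .def() call; the class names and module name below are assumed for illustration only:

#include <boost/python.hpp>
using namespace boost::python;

struct period { /* ... */ };

struct schedule
{
    period const& first_period() const { return p_; }
    period p_;
};

BOOST_PYTHON_MODULE(schedule_demo)
{
    class_<period>("period");
    class_<schedule>("schedule")
        // copies the period into a brand-new Python object
        .def("first_period_copy", &schedule::first_period,
             return_value_policy<copy_const_reference>())
        // returns a reference whose lifetime is tied to the schedule object
        .def("first_period_ref", &schedule::first_period,
             return_internal_reference<>())
        ;
}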

 1. Using the regular class_<> wrapper:
     class_<std::vector<double> >("std_vector_double")
       .def(...)
    @@ -189,13 +204,13 @@ class_<std::vector<double> >("std_vector_double")
     
   This can be moved to a template so that several types (double, int, long, etc.) can be wrapped with the same code. This technique is used in the file
    scitbx/include/scitbx/array_family/boost_python/flex_wrapper.h
    in the "scitbx" package. The file could easily be modified for - wrapping std::vector<> instantiations. + wrapping std::vector<> instantiations.

    This type of C++/Python binding is most suitable for containers that may contain a large number of elements (>10000).

    @@ -203,19 +218,19 @@ class_<std::vector<double> >("std_vector_double")
 2. Using custom rvalue converters. Boost.Python "rvalue converters" match function signatures such as:
     void foo(std::vector<double> const& array); // pass by const-reference
     void foo(std::vector<double> array); // pass by value
     
   Some custom rvalue converters are implemented in the file
    scitbx/include/scitbx/boost_python/container_conversions.h
   This code can be used to convert from C++ container types such as std::vector<> or std::list<> to Python tuples and vice versa. A few simple examples can be found in the file
   scitbx/array_family/boost_python/regression_test_module.cpp
@@ -230,7 +245,7 @@
   rvalue converters that convert to a "math_array" type instead of tuples. This is currently not implemented but is possible within the framework of Boost.Python V2 as it will be released in the next couple of weeks. [ed.: this was posted on 2002/03/10] (A minimal sketch of such an rvalue converter appears after this list.)

    It would also be useful to also have "custom lvalue converters" such as std::vector<> <-> Python list. These converters would @@ -245,7 +260,7 @@ void foo(std::vector<double>& array) } }

Python:
 >>> l = [1, 2, 3]
 >>> foo(l)
@@ -253,7 +268,7 @@ void foo(std::vector<double>& array)
 [2, 4, 6]
 
Custom lvalue converters require changes to the Boost.Python core library and are currently not available.
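Returning to choice 2 above: a minimal sketch (illustrative names, not the scitbx code) of an rvalue converter that accepts a Python list or tuple wherever a std::vector<double> is expected:

#include <boost/python.hpp>
#include <vector>

namespace {

  struct vector_double_from_python
  {
    vector_double_from_python()
    {
      boost::python::converter::registry::push_back(
        &convertible, &construct,
        boost::python::type_id<std::vector<double> >());
    }

    static void* convertible(PyObject* obj_ptr)
    {
      // accept lists and tuples only (a sketch; extend as needed)
      return (PyList_Check(obj_ptr) || PyTuple_Check(obj_ptr)) ? obj_ptr : 0;
    }

    static void construct(
      PyObject* obj_ptr,
      boost::python::converter::rvalue_from_python_stage1_data* data)
    {
      using namespace boost::python;
      void* storage = (
        (converter::rvalue_from_python_storage<std::vector<double> >*)
          data)->storage.bytes;
      std::vector<double>* v = new (storage) std::vector<double>();
      object seq(handle<>(borrowed(obj_ptr)));
      long n = len(seq);
      v->reserve(n);
      for (long i = 0; i < n; ++i) v->push_back(extract<double>(seq[i]));
      data->convertible = storage;
    }
  };

  // trivial function to exercise the conversion
  double sum_elements(std::vector<double> const& a)
  {
    double s = 0;
    for (std::size_t i = 0; i < a.size(); ++i) s += a[i];
    return s;
  }

} // anonymous namespace

BOOST_PYTHON_MODULE(vector_conversion_demo)
{
  vector_double_from_python();  // register the converter once
  boost::python::def("sum_elements", sum_elements);
}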

P.S.:

@@ -270,7 +285,7 @@ cvs -d:pserver:anonymous@cvs.cctbx.sourceforge.net:/cvsroot/cctbx co scitbx
Q: I get this error message when compiling a large source file. What can I do?

A: You have two choices:

@@ -278,7 +293,7 @@ cvs -d:pserver:anonymous@cvs.cctbx.sourceforge.net:/cvsroot/cctbx co scitbx
  • Upgrade your compiler (preferred)
 • Break your source file up into multiple translation units.

    my_module.cpp:

    @@ -292,7 +307,7 @@ BOOST_PYTHON_MODULE(my_module)
        more_of_my_module();
     }
     
   more_of_my_module.cpp:
     void more_of_my_module()
     {
    @@ -306,7 +321,7 @@ void more_of_my_module()
               can always pass a reference to the class_ object to a
               function in another source file, and call some of its member
              functions (e.g. .def(...)) in the auxiliary source
    -          file: 
    +          file:
     
               

    more_of_my_class.cpp:

    @@ -337,7 +352,7 @@ void more_of_my_class(class<my_class>& x)
           library that is under test, given that python code is minimal and
           boost::python either works or it doesn't. (ie. While errors can occur
           when the wrapping method is invalid, most errors are caught by the
    -      compiler ;-). 
    +      compiler ;-).
     
           

    The basic steps required to initiate a gdb session to debug a c++ library via python are shown here. Note, however that you should start @@ -421,7 +436,6 @@ Breakpoint 1, 0x1e04eff0 in python22!PyOS_Readline () from /cygdrive/c/WINNT/system32/python22.dll (gdb) # my_ext now loaded (with any debugging symbols it contains)

    -

  • Debugging extensions through Boost.Build

    @@ -429,7 +443,7 @@ Breakpoint 1, 0x1e04eff0 in python22!PyOS_Readline () "../../../tools/build">Boost.Build using the boost-python-runtest rule, you can ask it to launch your debugger for you by adding "-sPYTHON_LAUNCH=debugger" to your bjam - command-line: + command-line:
     bjam -sTOOLS=metrowerks "-sPYTHON_LAUNCH=devenv /debugexe" test
     bjam -sTOOLS=gcc -sPYTHON_LAUNCH=gdb test
    @@ -439,7 +453,7 @@ bjam -sTOOLS=gcc -sPYTHON_LAUNCH=gdb test
         commands it uses to invoke it. This will invariably involve setting up
         PYTHONPATH and other important environment variables such as
         LD_LIBRARY_PATH which may be needed by your debugger in order to get
    -    things to work right. 
    +    things to work right.
         

    Why doesn't my *= operator work?

    @@ -450,7 +464,7 @@ bjam -sTOOLS=gcc -sPYTHON_LAUNCH=gdb test operator. It always tells me "can't multiply sequence with non int type". If I use p1.__imul__(p2) instead of p1 *= p2, it successfully executes my code. What's - wrong with me? + wrong with me?

    A: There's nothing wrong with you. This is a bug in Python 2.2. You can see the same effect in Pure Python (you can learn a lot @@ -530,7 +544,7 @@ make frameworkinstall

    with virtual functions. If you make a wrapper class with an initial PyObject* constructor argument and store that PyObject* as "self", you can get back to it by casting down to that wrapper type in a thin wrapper function. For example:
     class X { X(int); virtual ~X(); ... };
     X* f();  // known to return Xs that are managed by Python objects
    @@ -563,7 +577,7 @@ class_<X,X_wrap>("X", init<int>())
         runtime check that it's valid. This approach also only works if the
         X object was constructed from Python, because
         Xs constructed from C++ are of course never
    -    X_wrap objects. 
    +    X_wrap objects.
     
         

    Another approach to this requires you to change your C++ code a bit; if that's an option for you it might be a better way to go.
@@ -582,11 +596,13 @@ class_<X,X_wrap>("X", init<int>())
    its containing Python object, and you could have your f_wrap function look in that mapping to get the Python object out.
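A minimal sketch of the first approach described above (casting down to the wrapper class); the module name is illustrative and f() is assumed to be defined elsewhere:

#include <boost/python.hpp>
using namespace boost::python;

class X
{
  public:
    X(int) {}
    virtual ~X() {}
};

X* f();  // known to return Xs that are managed by Python objects

struct X_wrap : X
{
    X_wrap(PyObject* self_, int v) : X(v), self(self_) {}
    PyObject* self;  // the stored "self" pointer described above
};

object f_wrap()
{
    X_wrap* w = dynamic_cast<X_wrap*>(f());
    if (w == 0)
    {
        // only Python-constructed Xs are X_wrap objects
        PyErr_SetString(PyExc_TypeError,
                        "f() did not return a Python-constructed X");
        throw_error_already_set();
    }
    return object(handle<>(borrowed(w->self)));  // the existing Python object
}

BOOST_PYTHON_MODULE(x_demo)
{
    class_<X, X_wrap>("X", init<int>());
    def("f", f_wrap);
}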

    +
    +

    How can I wrap a function which needs to take ownership of a raw pointer?

    Part of an API that I'm wrapping goes something like this:
     struct A {}; struct B { void add( A* ); }
     where B::add() takes ownership of the pointer passed to it.
    @@ -597,9 +613,9 @@ where B::add() takes ownership of the pointer passed to it.
     a = mod.A()
     b = mod.B()
     b.add( a )
    -del a         
    +del a
     del b
    -# python interpreter crashes 
    +# python interpreter crashes
     # later due to memory corruption.
     
    @@ -610,13 +626,13 @@ del b

    --Bruce Lowery

    Yes: Make sure the C++ object is held by auto_ptr:
     class_<A, std::auto_ptr<A> >("A")
         ...
         ;
     
    Then make a thin wrapper function which takes an auto_ptr parameter:
     void b_insert(B& b, std::auto_ptr<A> a)
     {
    @@ -627,26 +643,237 @@ void b_insert(B& b, std::auto_ptr<A> a)
         Wrap that as B.add. Note that pointers returned via manage_new_object
         will also be held by auto_ptr, so this transfer-of-ownership
    -    will also work correctly. 
    +    will also work correctly.
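For completeness, a sketch of the obvious implementation of that thin wrapper and its registration (assumptions: hand the raw pointer to B::add, then release the auto_ptr so Python gives up ownership):

void b_insert(B& b, std::auto_ptr<A> a)
{
    b.add(a.get());
    a.release();  // B now owns the A object
}

// ...
class_<B>("B")
    .def("add", b_insert)
    ;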
     
    +    

    Compilation takes too much time and eats too much memory! What can I do to make it faster?

    Please refer to the Techniques section in the tutorial.

    How do I create sub-packages using Boost.Python?

-   In the Techniques section of the tutorial this topic is explored.
+   Please refer to the Techniques section in the tutorial.

    error C2064: term does not evaluate to a function taking 2 arguments

    + Niall Douglas provides these notes:

    + If you see Microsoft Visual C++ 7.1 (MS Visual Studio .NET 2003) issue an error message like the following, it is most likely due to a bug in the compiler:

    boost\boost\python\detail\invoke.hpp(76):
    +error C2064: term does not evaluate to a function taking 2 arguments
    + This message is triggered by code like the following:
    #include <boost/python.hpp>
    +
    +using namespace boost::python;
    +
    +class FXThread
    +{
    +public:
    +    bool setAutoDelete(bool doso) throw();
    +};
    +
    +void Export_FXThread()
    +{
    +    class_< FXThread >("FXThread")
    +        .def("setAutoDelete", &FXThread::setAutoDelete)
    +    ;
    +}
    +    
    + The bug is related to the throw() modifier. As a workaround, cast off the modifier. E.g.:
    +        .def("setAutoDelete", (bool (FXThread::*)(bool)) &FXThread::setAutoDelete)
    +

    (The bug has been reported to Microsoft.)

    + +
    +

    How do I handle void * conversion?

    + Niall Douglas provides these notes:

    + For several reasons Boost.Python does not support void * as an argument or as a return value. However, it is possible to wrap functions with void * arguments or return values using thin wrappers and the opaque pointer facility. E.g.:

    #include <boost/python.hpp>
    #include <boost/python/return_opaque_pointer.hpp>

    // Declare the following in each translation unit
    +struct void_; // Deliberately do not define
    +BOOST_PYTHON_OPAQUE_SPECIALIZED_TYPE_ID(void_);
    +
    +void *foo(int par1, void *par2);
    +
    +void_ *foo_wrapper(int par1, void_ *par2)
    +{
    +    return (void_ *) foo(par1, par2);
    +}
    +...
    +BOOST_PYTHON_MODULE(bar)
    +{
    +    def("foo", &foo_wrapper,
    +        return_value_policy<return_opaque_pointer>());
    +}
    + +
    +

    How can I automatically convert my custom string type to and from a Python string?

    + Ralf W. Grosse-Kunstleve provides these notes:

    + Below is a small, self-contained demo extension module that shows how to do this. Here is the corresponding trivial test:

    import custom_string
    +assert custom_string.hello() == "Hello world."
    +assert custom_string.size("california") == 10
    + + If you look at the code you will find: + +
      +
    • A custom to_python converter (easy): + custom_string_to_python_str + +
    • A custom rvalue converter (needs more code): custom_string_from_python_str

    The custom converters are registered in the global Boost.Python registry near the top of the module initialization function. Once flow control has passed through the registration code, the automatic conversions from and to Python strings will work in any module imported in the same process.
    #include <boost/python/module.hpp>
    +#include <boost/python/def.hpp>
    +#include <boost/python/to_python_converter.hpp>
    +
    +namespace sandbox { namespace {
    +
    +  class custom_string
    +  {
    +    public:
    +      custom_string() {}
    +      custom_string(std::string const& value) : value_(value) {}
    +      std::string const& value() const { return value_; }
    +    private:
    +      std::string value_;
    +  };
    +
    +  struct custom_string_to_python_str
    +  {
    +    static PyObject* convert(custom_string const& s)
    +    {
    +      return boost::python::incref(boost::python::object(s.value()).ptr());
    +    }
    +  };
    +
    +  struct custom_string_from_python_str
    +  {
    +    custom_string_from_python_str()
    +    {
    +      boost::python::converter::registry::push_back(
    +        &convertible,
    +        &construct,
    +        boost::python::type_id<custom_string>());
    +    }
    +
    +    static void* convertible(PyObject* obj_ptr)
    +    {
    +      if (!PyString_Check(obj_ptr)) return 0;
    +      return obj_ptr;
    +    }
    +
    +    static void construct(
    +      PyObject* obj_ptr,
    +      boost::python::converter::rvalue_from_python_stage1_data* data)
    +    {
    +      const char* value = PyString_AsString(obj_ptr);
    +      if (value == 0) boost::python::throw_error_already_set();
    +      void* storage = (
    +        (boost::python::converter::rvalue_from_python_storage<custom_string>*)
    +          data)->storage.bytes;
    +      new (storage) custom_string(value);
    +      data->convertible = storage;
    +    }
    +  };
    +
    +  custom_string hello() { return custom_string("Hello world."); }
    +
    +  std::size_t size(custom_string const& s) { return s.value().size(); }
    +
    +  void init_module()
    +  {
    +    using namespace boost::python;
    +
    +    boost::python::to_python_converter<
    +      custom_string,
    +      custom_string_to_python_str>();
    +
    +    custom_string_from_python_str();
    +
    +    def("hello", hello);
    +    def("size", size);
    +  }
    +
    +}} // namespace sandbox::<anonymous>
    +
    +BOOST_PYTHON_MODULE(custom_string)
    +{
    +  sandbox::init_module();
    +}
    + +
    +

    Why is my automatic to-python conversion not being found?

    + Niall Douglas provides these notes:

    + If you define custom converters similar to the ones shown above, the def_readonly() and def_readwrite() member functions provided by boost::python::class_ for direct access to your member data will not work as expected. This is because def_readonly("bar", &foo::bar) is equivalent to:

    .add_property("bar", make_getter(&foo::bar, return_internal_reference()))
    + + Similarly, def_readwrite("bar", &foo::bar) is + equivalent to: + +
    .add_property("bar", make_getter(&foo::bar, return_internal_reference()),
    +                     make_setter(&foo::bar, return_internal_reference())
    + + In order to define return value policies compatible with the + custom conversions replace def_readonly() and + def_readwrite() by add_property(). E.g.: + +
    .add_property("bar", make_getter(&foo::bar, return_value_policy<return_by_value>()),
    +                     make_setter(&foo::bar, return_value_policy<return_by_value>()))
    + +
    +

    Is Boost.Python thread-aware/compatible with multiple interpreters?

    + Niall Douglas provides these notes:

    + The quick answer to this is: no.

    +

    + The longer answer is that it can be patched to be so, but it's complex. You will need to add custom lock/unlock wrapping around every point where your code enters Boost.Python (particularly every virtual function override), plus heavily modify boost/python/detail/invoke.hpp with custom unlock/lock wrapping around every point where Boost.Python enters your code. You must furthermore take care not to unlock/lock when Boost.Python is invoking iterator changes via invoke.hpp.
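As a rough illustration of the lock/unlock wrapping described above (a sketch only, not the patch itself), RAII helpers built on the Python C API can be used: acquire the GIL around every entry from your threads into Python/Boost.Python, and release it around long-running C++ work called from Python. PyGILState_Ensure/Release require Python 2.3 or later and an interpreter in which PyEval_InitThreads() has been called.

#include <boost/python.hpp>   // also pulls in Python.h

class ensure_gil              // hold the GIL while calling into Python
{
  public:
    ensure_gil() : state_(PyGILState_Ensure()) {}
    ~ensure_gil() { PyGILState_Release(state_); }
  private:
    PyGILState_STATE state_;
};

class release_gil             // let other Python threads run during C++ work
{
  public:
    release_gil() : save_(PyEval_SaveThread()) {}
    ~release_gil() { PyEval_RestoreThread(save_); }
  private:
    PyThreadState* save_;
};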

    +

    + There is a patched invoke.hpp posted in the C++-SIG mailing list archives, and a real implementation of all the machinery necessary can be found in the TnFOX project at this SourceForge project location.

    +
    -

    Revised
-   18 March, 2003
+   10 November, 2003

    @@ -655,4 +882,3 @@ void b_insert(B& b, std::auto_ptr<A> a) Rights Reserved.

    -