Instrumentation SDK

Customization involves adding hooks from our public SDK to your code so that you can take advantage of additional filtering capabilities on the dashboard, change how requests are traced, or capture additional information during the trace.

Getting Started

The instrumentation SDK is provided by the Python agent as module methods and decorators, so using it is as simple as installing the agent, importing it into your application code, and calling the provided methods. Read on for some common usage examples.

Custom Spans

Python instrumentation creates some spans by default, e.g., ‘django’, ‘wsgi’, ‘pylibmc’, which more or less map to the components you’ll find in the support matrix. If these spans don’t provide enough visibility, you can further divide your code into sub-spans. How you segment your application into spans is entirely up to you. For example, out-of-the-box instrumentation might not recognize calls to external processes, or a subsection of your code might function as a discrete service without being external to the process. In either case, Python instrumentation offers two facilities for manually creating spans: a decorator, for when you want to represent a particular function as a span, and a pair of SDK methods, better suited when the block of code isn’t neatly contained in a function.

Custom span via method decorator

To create a custom span using the decorator, wrap the function whose execution should be reported as a span (here named ‘slow_thing’):

import appoptics_apm

@appoptics_apm.log_method('slow_thing')
def my_possibly_slow_method(*args, **kwargs):
    # the function body is reported as the 'slow_thing' span
    ...

Custom span via SDK methods

There’s a convention that must be followed when defining a new span: the logging call that marks the entry into the span must be labeled ‘entry’, and the logging call that marks the exit from the user-defined span must be labeled ‘exit’.

The SDK provides the appoptics_apm.log_entry and appoptics_apm.log_exit methods, each of which reports a single “entry” or “exit” event. An example:

# start span with optional key-value pairs to report
appoptics_apm.log_entry('my_span', {'key1':'value1'})

# some application code

# end span with optional key-value pairs to report
appoptics_apm.log_exit('my_span', {'key2':'value2'})
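Since every log_entry must be paired with a matching log_exit, even when the wrapped code raises, you may find it convenient to wrap the pair in a context manager. The custom_span helper below is not part of the SDK, just a sketch; the import is guarded with no-op stubs so it runs even where the agent isn’t installed:

```python
import contextlib

try:
    import appoptics_apm
except ImportError:
    # agent not installed: no-op stubs so this sketch still runs
    class appoptics_apm:
        @staticmethod
        def log_entry(span, keys=None):
            pass

        @staticmethod
        def log_exit(span, keys=None):
            pass

@contextlib.contextmanager
def custom_span(name, entry_kvs=None, exit_kvs=None):
    # report the 'entry' event, run the wrapped block, then report 'exit'
    appoptics_apm.log_entry(name, entry_kvs)
    try:
        yield
    finally:
        appoptics_apm.log_exit(name, exit_kvs)

with custom_span('my_span', {'key1': 'value1'}):
    pass  # some application code
```

The finally clause guarantees the ‘exit’ event is reported even if the block raises, keeping the span balanced.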

Add info events

You can add info events to a span to attach any metadata that may be of interest during later analysis of a trace.

To add info to a span, call appoptics_apm.log anywhere within it; you may do this at multiple points if necessary. The info events are displayed on the raw span data tab of the trace details page. If all of the key/value pairs required for special interpretation are present, the span type is reflected in the span details tab. An example:

def my_logically_separate_task(*args, **kwargs):
    # start span
    appoptics_apm.log_entry('my_span')

    # attach extra info as key-value pairs to my_span
    appoptics_apm.log('info', 'my_span', {'key1':'value1', 'key2':'value2'})

    # end span
    appoptics_apm.log_exit('my_span')

Report errors and exceptions

Many types of errors are already caught, e.g., web server errors, or any exceptions that reach the django or WSGI top spans. Those that aren’t caught by instrumentation can be reported manually using either appoptics_apm.log_error or appoptics_apm.log_exception. Use the former for any error that doesn’t necessarily result in an exception; use the latter inside an exception handler to collect and report information about built-in exceptions. Despite the different applications, both report two key pieces of information, the type of error and an error message, which enable AppOptics to classify the event as an error event. Error events are marked in the AppOptics APM dashboard and the corresponding information is shown in the errors panel. An example:

try:
    ok = False

    # some application code

    if not ok:
        # report an error condition
        appoptics_apm.log_error('MyErrorCls', 'unexpected result!')

    call_non_existent()  # deliberately undefined, raises an exception
except Exception:
    # report the exception currently being handled
    appoptics_apm.log_exception()

Starting a Trace

Out-of-the-box instrumentation might not capture traces on certain systems (custom web frameworks, batch jobs, etc.), but traces can still be created using the appoptics_apm.start_trace and appoptics_apm.end_trace SDK methods, together with an optional load_inst_modules call to instrument supported components and an optional appoptics_apm.appoptics_ready call to ensure agent readiness.

Check if agent is ready

The agent initializes and maintains a connection to an AppOptics server, and also receives settings used for making tracing decisions. This process can take up to a few seconds depending on the connection. If the application receives requests before initialization has completed, these requests will not be traced. While this is not critical for long-running server processes, it can be a problem for short-running apps such as cron jobs or CLI apps. A call to this method lets the app block until initialization has completed and the agent is ready for tracing. The method takes an optional timeout value in milliseconds, which tells the agent how long to wait to become ready. The default timeout is 3000; a timeout of 0 means no blocking.

appoptics_ready(wait_milliseconds=3000, integer_response=False)

By default it returns a boolean: True (ready) or False (not ready). To get more detailed information, set integer_response=True, which changes the return value to one of the integer status codes listed below:

  • 0: unknown error
  • 1: is ready
  • 2: not ready yet, try later
  • 3: limit exceeded
  • 4: invalid API key
  • 5: connection error
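For example, a short-running script can use integer_response=True to report why tracing may be unavailable. The STATUS_MESSAGES mapping below simply restates the codes above; the import is guarded with a stub (which always reports a connection error) so the sketch runs even where the agent isn’t installed:

```python
STATUS_MESSAGES = {
    0: 'unknown error',
    1: 'is ready',
    2: 'not ready yet, try later',
    3: 'limit exceeded',
    4: 'invalid API key',
    5: 'connection error',
}

try:
    import appoptics_apm
except ImportError:
    # agent not installed: stub standing in for the real agent
    class appoptics_apm:
        @staticmethod
        def appoptics_ready(wait_milliseconds=3000, integer_response=False):
            return 5 if integer_response else False

status = appoptics_apm.appoptics_ready(wait_milliseconds=10000, integer_response=True)
if status != 1:
    print('Agent not ready: %s' % STATUS_MESSAGES.get(status, 'unrecognized status'))
```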

Loading component instrumentation

Supported components such as database and RPC clients are loaded automatically when a trace is started by out-of-the-box instrumentation. To instrument these components when a trace is started via the SDK, you can call the load_inst_modules method at the beginning of your instrumented application.

Complete example

An example of using the SDK to instrument a Celery task. You can try it out by saving the example code as test_celery.py, installing the required dependencies, then on the command line:

  1. start the celery worker: celery -A test_celery worker -P solo &
  2. run this script: python test_celery.py

from celery import Celery
import time
import redis
import requests
import appoptics_apm

# load all supported component instrumentation
from appoptics_apm import loader
loader.load_inst_modules()

app = Celery(
    'test_celery',
    broker='redis://localhost/0',
    backend='redis://localhost/1'
)

@appoptics_apm.log_method('do_work')
def do_work(x, y):
    # requests and redis calls will be auto-instrumented
    _ = requests.get('https://www.appoptics.com')
    rd0 = redis.Redis(host='localhost', db=0)
    rd0.keys('*celery*')
    rd1 = redis.Redis(host='localhost', db=1)
    rd1.set('foo', 'bar')
    rd1.get('foo')
    return x + y

@app.task
def add(x, y):
    # check for agent readiness
    if not appoptics_apm.appoptics_ready(5000):
        print('Agent is not ready so the following may not get traced.')

    try:
        # start a trace, and set a transaction name on this task
        appoptics_apm.start_trace('celery_task', keys=None, xtr=None)
        appoptics_apm.set_transaction_name('celery_task_add')

        # work on the task
        result = do_work(x, y)

        # report an error condition
        if result < 0:
            appoptics_apm.log_error('MyErrorCls', 'unexpected result!')

        return result
    except Exception:
        # report an exception
        appoptics_apm.log_exception()
    finally:
        # end the trace
        appoptics_apm.end_trace('celery_task')

if __name__ == '__main__':
    result = add.delay(4, 4)
    print(result.get())

Custom Transaction Name

Our out-of-the-box instrumentation assigns a transaction name based on the URL and the Controller/Action values detected. However, you may want to override it with a name that better describes your instrumented operation. Note that the transaction name is converted to lowercase and might be truncated, with invalid characters replaced.

If multiple transaction names are set on the same trace, the last one is used.

An empty string or None is considered an invalid transaction name value and will be ignored.

AppOptics APM provides the following SDK methods for reporting a custom transaction name.

To override the default out-of-the-box transaction name, set the transaction name while processing the request:

def my_http_request_handler(request):
    # override the transaction name with a custom one
    appoptics_apm.set_transaction_name('something_meaningful')

You can also set the transaction name on traces started via the SDK call:

def my_service_handler():
    # start the trace and set a transaction name for it
    appoptics_apm.start_trace('root_span')
    appoptics_apm.set_transaction_name('something_meaningful')

    # some application code

    # add HTTP information for this span if this is an HTTP request
    # this information helps segment key performance metrics by status
    # and method
    appoptics_apm.set_request_info(host='www.abc.com', status_code=200, method='GET')

    # end the trace
    appoptics_apm.end_trace('root_span')

Custom Metrics

AppOptics APM provides the following SDK methods for reporting custom metrics data.

custom_metrics_increment(name, count, host_tag=False, service_name=None, tags=None, tags_count=0)
custom_metrics_summary(name, value, count=1, host_tag=False, service_name=None, tags=None, tags_count=0)

Note that if tags is given, the tags_count parameter must be set to the number of items in tags. Examples:

# simple usage, no tags
appoptics_apm.custom_metrics_increment("my-counter-metric", 1)

# create a MetricTags object for two tags
tags_count = 2
tags = appoptics_apm.MetricTags(tags_count)
# add tags into MetricTags by specifying the index and tag key and value
tags.add(0, "Peter", "42")
tags.add(1, "Paul", "45")
# submit the metric
appoptics_apm.custom_metrics_increment("my-counter-metric", 1, False, None, tags, tags_count)

Reported values are aggregated based on the metric name and tags.
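custom_metrics_summary records a value together with a count per call, which suits timing measurements. In the sketch below, the 'job-duration-ms' metric name and the 'queue' tag are illustrative only, and the import is guarded with stubs so the example runs without the agent:

```python
import time

try:
    import appoptics_apm
except ImportError:
    # agent not installed: minimal stubs standing in for the real SDK
    class appoptics_apm:
        class MetricTags:
            def __init__(self, count):
                self.count = count

            def add(self, index, key, value):
                pass

        @staticmethod
        def custom_metrics_summary(name, value, count=1, host_tag=False,
                                   service_name=None, tags=None, tags_count=0):
            pass

start = time.time()
# ... the work being measured ...
elapsed_ms = (time.time() - start) * 1000

# one tag; tags_count must match the number of items in tags
tags_count = 1
tags = appoptics_apm.MetricTags(tags_count)
tags.add(0, 'queue', 'default')

# report one observation of the job duration
appoptics_apm.custom_metrics_summary('job-duration-ms', elapsed_ms, 1,
                                     False, None, tags, tags_count)
```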

Log Trace ID

You can manually add trace context to your logs if Automatic Insertion is not supported or suitable for your application.

To get a loggable trace context, call appoptics_apm.get_log_trace_id(), which returns a string like '7435A9FE510AE4533414D425DADF4E180D2B4E36-0' for further processing. Note that the function returns '0000000000000000000000000000000000000000-0' when there is no trace context or the agent is running in no-op mode.
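One way to insert this value into standard library logging is a logging.Filter that calls get_log_trace_id for every record. The TraceIdFilter class and the ao_trace_id attribute name below are illustrative, not part of the SDK; when the agent isn’t installed, the sketch falls back to the all-zeros value described above:

```python
import logging

try:
    import appoptics_apm
    get_log_trace_id = appoptics_apm.get_log_trace_id
except ImportError:
    def get_log_trace_id():
        # the value reported when there is no trace context or in no-op mode
        return '0000000000000000000000000000000000000000-0'

class TraceIdFilter(logging.Filter):
    def filter(self, record):
        # attach the current trace context to every log record
        record.ao_trace_id = get_log_trace_id()
        return True

logger = logging.getLogger('myapp')
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter(
    '%(levelname)s trace_id=%(ao_trace_id)s %(message)s'))
handler.addFilter(TraceIdFilter())
logger.addHandler(handler)

logger.warning('something happened')
```

Because the filter is attached to the handler, it runs before the record is formatted, so %(ao_trace_id)s is always available to the formatter.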